On Wed, Jul 1, 2020 at 9:40 AM Peter Rajnoha <prajnoha@xxxxxxxxxx> wrote:
>
> On 7/1/20 2:03 PM, Neal Gompa wrote:
> > On Wed, Jul 1, 2020 at 8:00 AM Peter Rajnoha <prajnoha@xxxxxxxxxx> wrote:
> >>
> >> On 6/30/20 9:35 PM, Igor Raits wrote:
> >>> On Tue, 2020-06-30 at 15:18 -0400, Ben Cotton wrote:
> >>>> https://fedoraproject.org/wiki/Changes/SID
> >>>
> >>>> == Summary ==
> >>>> Introduce the Storage Instantiation Daemon (SID), which aims to provide a central, event-driven engine for writing modules that identify specific Linux storage devices and their dependencies, collect information, and track state, while being aware of device groups forming layers and layers forming whole stacks, or simply creating custom groups of enumerated devices. SID will provide mechanisms to retrieve and query the collected information and the possibility to bind predefined or custom triggers with actions for each group.
> >>>
> >>>> == Owner ==
> >>>> * Name: [[User:prajnoha | Peter Rajnoha]]
> >>>> * Email: prajnoha@xxxxxxxxxx
> >>>
> >>>> == Detailed Description ==
> >>>> Over the years, various storage subsystems have been installing hooks within udev rules and calling out to numerous external commands so that they can react to events like device presence, removal, or a change in general. However, this approach has ended up with very complex rules that are hard to maintain and debug once we consider storage setups where we build layers consisting of several underlying devices (horizontal scope) and where we can stack one layer on top of another (vertical scope), building up diverse storage stacks where we also need to track the progression of states at either the device level or the group level.
> >>>
> >>>> SID extends udevd functionality here by incorporating a notion of device grouping directly in its core, which helps with tracking devices in storage subsystems like LVM, multipath, MD... Also, it provides its own database where records are separated into per-device, per-module, global, or udev namespaces. The udev namespace keeps per-device records that are imported from and/or exported to the udev environment, and this is used as a compatible communication channel with udevd. The records can be marked with restriction flags that aid record separation and prevent other modules from reading, writing, or creating a record with the same key, hence making sure that only a single module can create records with certain keys (reserving a key).
> >>>
> >>>> Currently, the SID project provides a companion command called 'usid' which is used for communication between udev and SID itself. After calling the usid command in a udev rule, device processing is transferred to SID, and SID strictly separates the processing into discrete phases (device identification, pre-scan, device scan, post-scan). Within these phases, it is possible to decide whether the next phase is executed, to schedule delayed actions, or to set records in the database that can fire triggers with associated actions, or records which are then exported to the udev environment (mainly for backwards compatibility and so that other udev rules have a chance to react).
> >>>> The scheduled actions and triggers are executed outside of udev context, so they do not delay udev processing itself, mitigating the udev timeout issues where unnecessary work is done.
> >>>
> >>>> A module writer can hook into the processing phases and use SID's API to access the database, as well as set triggers with actions, schedule separate actions, and mark devices as ready (or not) for use in the next layers. The database can be used within any phase to retrieve and store key-value records (where the value can, in general, be any binary value), and the records can be marked as transient (only available during the processing phases of the current event) or persistent, so they can be accessed while processing subsequent events.
> >>>
> >>>> == Benefit to Fedora ==
> >>>> The main benefit is centralizing the solution to issues that storage subsystem maintainers have been hitting with udev, that is:
> >>>
> >>>> * providing a central infrastructure for storage event processing, currently targeted at udev events
> >>>
> >>>> * improving the way storage events and their sequences are recognized, for which complex udev rules had to be applied before
> >>>
> >>>> * a single notion of device readiness shared among various storage subsystems (a single API to set the state instead of different subsystems setting various variables)
> >>>
> >>>> * providing richer possibilities to store and retrieve storage-device-related records compared to the udev database
> >>>
> >>>> * direct support for generic device grouping (matching subsystem-related groups like LVM, multipath, MD... or creating arbitrary groups of devices)
> >>>
> >>>> * a centralized solution for scheduling triggers with associated actions defined on groups of storage devices
> >>>
> >>>> * a centralized solution for delayed actions on storage devices and groups of devices (avoiding unnecessary work within udev context and hence avoiding frequent udev timeouts when processing events for such devices)
> >>>
> >>> Is this purely about adding a package into the repositories and raising awareness that such a tool exists?
> >>>
> >>
> >> It's introducing a new mechanism we could use to better handle events for storage devices - so yes, also raising awareness. At this moment, it means adding a new package with the daemon and accompanying tools, with the functionality disabled by default; then filling it up with modules, then thinking about enabling it by default, getting there step by step...
> >>
> >
> > I'll be honest, I don't get why this exists. Most folks expect this to be an aspect of UDisks, so why isn't it?
> >
>
> UDisks doesn't solve instantiation - activation and deactivation (currently a mixture of various udev rules and external tools does that in a decentralized way, each subsystem in its own way). UDisks enumerates existing devices and manipulates them. SID provides an abstraction on top of udev to handle events in the way we need for storage (e.g. the grouping part and the trigger/action part).
>
> To put it in a picture - udev is at the very bottom level (simple properties, single devices, simple rules), SID sits on top of udev providing those extensions, and then UDisks would use SID to get a better view of layers and stacks, knowing which devices are usable or not.
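To make the handoff described in the quoted proposal concrete, the udev rule that transfers processing to SID could look roughly like the sketch below. This is only an illustration: the "usid scan" invocation, its path, the rules-file name, and the SID_READY property are assumptions made for the example, not the project's actual interface. The IMPORT{program} mechanism itself is standard udev behaviour - it runs the named program and imports any KEY=value pairs it prints back into the udev environment, which matches how SID exports records for other rules to react to.

  # Hypothetical rule, e.g. /usr/lib/udev/rules.d/00-sid.rules ("usid scan"
  # and the property name are illustrative assumptions, not SID's real CLI):
  # hand every block-device add/change event over to SID and import the
  # key=value records SID prints back into the udev environment.
  SUBSYSTEM=="block", ACTION=="add|change", IMPORT{program}="/usr/bin/usid scan"

  # Later rules (and other subsystems' rules) could then react to whatever
  # SID exported, e.g. a readiness flag, instead of re-deriving it themselves.
  ENV{SID_READY}=="1", SYMLINK+="disk/ready/%k"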
>

So, I think this cuts to the heart of my question: why not fix UDisks to solve this problem?

The problem with this complexity is that there is a large universe of tools that leverage UDisks and that would be able to trivially take advantage of expanded functionality in UDisks. But by making it a separate daemon and service with its own lifecycle and state machine, you have essentially doubled the work everyone has to do to handle these use cases.

It's not that desktop tools like GNOME Disks or KDE Partition Manager wouldn't need these features; it's that you are putting them in what I think is the wrong place (a separate service), making it very difficult for those tools to adapt to it.

--
真実はいつも一つ!/ Always, there's only one truth!