Re: Changes to dependency graph during boot

On 6/24/19 7:53 PM, Andrei Borzenkov wrote:
> 24.06.2019 17:41, Conrad Hoffmann wrote:
>> Hi,
>>
>> TL;DR: I was wondering what happens if a unit executed early during the
>> boot process changes the current dependency graph by either enabling or
>> even starting another unit that was previously disabled. Is this defined
>> behaviour, and if so what are the rules?
>>
> 
> Enabling a unit should have no impact on the existing job queue, as
> dependencies are evaluated when a job is submitted.

Thanks for confirming that.

> Starting a unit will simply add its job and all its dependencies to the
> current job queue unless it results in a conflict. Conflicts inside a single
> "transaction" (i.e. the dependency closure built on job submission) are
> blocked, but between different jobs I am not sure. It may be rejected, or it
> may cancel current jobs.

What constitutes a conflict here? A dependency re-calculation would not
result in any conflict, but finishing the current queue (boot) first and
only then starting the "new" unit and its dependencies would lead to
incorrect dependencies. So I take it that already counts as a conflict?



>> Longer version:
>>
>> I am looking at an issue that seems to be with cloud-init in combination
>> with Arch Linux. Arch Linux uses systemd, and cloud-init is executed
>> (via units) multiple times during boot (running different cloud-init
>> modules at each stage). The idea is that one of the early cloud-init
>> modules writes a network config and the next stage's systemd unit
>> depends on network.target, so that e.g. systemd-networkd would be
>> started in between, reading the generated config and setting up the
>> network accordingly.
>>
>> However, the Arch Linux implementation in cloud-init uses the netctl [1]
>> tool, which works a bit differently: there is a dedicated unit file for
>> each connection (called a profile), and netctl can be used to switch
>> between them (or have multiple enabled at the same time). This has the
>> effect that, since you don't know the network configuration in advance,
>> you also don't know which profiles/units to enable for boot, as they
>> will only be generated on first boot by the cloud-init service. As such,
>> the cloud-init code does what would seem like a reasonable idea: in
>> addition to generating the units for each connection, it also runs
>> `systemctl enable` for them. However, this does not seem to be working. My
>> observation is that this does work on the _second_ boot, but not on
>> the first one. I even tested running `daemon-reload` after `enable`, but
>> to no avail.
>>
>> There are multiple ways in which the code could be made to work. But the
>> question of what to expect when running systemctl commands during boot
>> seemed both general and important enough (also in the wider context of
>> cloud-init) that I figured I should get some professional input before
>> making any assumptions :)
>>
>> So the questions would be: a service executed by systemd during boot ...
>>
>> • *enables* a previously disabled unit, what happens/should happen?
>> • *starts* a previously disabled unit, what happens/should happen?
>>
>> In both cases, the implication is that the unit to be enabled/started
>> causes non-trivial changes to the dependency graph.
>>
>> [1] https://wiki.archlinux.org/index.php/Netctl
>>
>> Thanks a lot,
>> Conrad
_______________________________________________
systemd-devel mailing list
systemd-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/systemd-devel



