mirror of https://github.com/systemd/systemd synced 2026-03-16 18:14:46 +01:00

Compare commits


34 Commits

Author SHA1 Message Date
Yu Watanabe
89c629fc4b
Merge pull request #19726 from poettering/path-event-symlink
teach .path units to notice events on paths with components that are symlinks
2021-05-26 10:51:00 +09:00
Yu Watanabe
b69855e645
Merge pull request #19727 from poettering/pcr-comma
Allow PCRs to be separated by "+" instead of ","
2021-05-26 10:37:24 +09:00
Yu Watanabe
95599cacd3 core/service: do not set zero error to log_unit_debug_errno()
Fixes #19725.
2021-05-26 10:23:36 +09:00
Yu Watanabe
764dca0edc dns-domain: fix build failure with libidn
Follow-up for 319a4f4bc46b230fc660321e99aaac1bc449deea.

Fixes #19723.
2021-05-26 10:23:36 +09:00
Luca Boccassi
93f235e8d8
Merge pull request #19722 from poettering/empty-string-loginctl-man
document that "loginctl kill-session" takes an empty string + add the same for per-user stuff
2021-05-25 23:23:42 +01:00
Lennart Poettering
108144adea load-fragment: validate paths properly
The comment suggests we validate paths here, but we actually didn't: we
only validated filenames. Let's fix that.

(Note this still lets any kind of path through, including those with
".." and such; this is not a normalization check, after all.)
2021-05-25 23:19:50 +01:00
Lennart Poettering
a3f9cd27cd test: add simple test for PCR list parsing 2021-05-25 23:40:10 +02:00
Lennart Poettering
d57f6340b6 tpm2-util: accept empty string for empty PCR list 2021-05-25 23:40:01 +02:00
Lennart Poettering
a1788a69b2 tpm2: support "+" as separator for TPM PCR lists
Previously, we supported only "," as separator. This adds support for
"+" and makes it the documented choice.

This is to make specifying PCRs in crypttab easier, since commas are
already used there for separating volume options, and needless escaping
sucks.

"," continues to be supported but, in order to keep things minimal, is not
documented.

Fixes: #19205
2021-05-25 23:28:54 +02:00
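As a hedged illustration (the volume name, UUID placeholder, and PCR choice below are hypothetical, not from the commit), the "+" separator keeps the PCR list from colliding with crypttab's comma-separated option list:

```
# /etc/crypttab — hypothetical entry; "7+11" is an example PCR list
data  UUID=...  none  tpm2-device=auto,tpm2-pcrs=7+11
```

With "," as the separator, the same entry would have needed escaping to stop `tpm2-pcrs=7,11` from being split into two unrelated options.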
Lennart Poettering
41cdcb5498 core: watch paths with symlinks in .path units
When watching paths that contain symlinks in some element we so far
always only watched the inode they are pointing to, not the symlink
inode itself. Let's fix that and always watch both. We do this by simply
installing the inotify watch once with and once without IN_DONT_FOLLOW.
For non-symlink inodes this just installs the same watch twice (where
the second one replaces the first), which effectively has no effect.
For symlinks it means we'll watch both source and destination.

Fixes: #17727
2021-05-25 23:14:38 +02:00
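The double-installation trick described above can be sketched as follows. This is a minimal standalone version, not systemd's actual code; the function name is invented:

```c
#include <stdint.h>
#include <sys/inotify.h>

/* Sketch of the commit's trick (the function name is mine, not systemd's):
 * install the watch once following symlinks and once with IN_DONT_FOLLOW.
 * For a non-symlink path both calls resolve to the same inode, so the
 * second call merely replaces the first (inotify hands out one watch
 * descriptor per inode). For a symlink, the two calls watch the target
 * and the link inode respectively. Returns the last wd, or -1 on error. */
int watch_with_and_without_follow(int inotify_fd, const char *path, uint32_t mask) {
        int wd;

        wd = inotify_add_watch(inotify_fd, path, mask);
        if (wd < 0)
                return -1;

        /* Second watch, this time without following a final symlink. */
        wd = inotify_add_watch(inotify_fd, path, mask | IN_DONT_FOLLOW);
        if (wd < 0)
                return -1;

        return wd;
}
```

For plain files both calls return the same watch descriptor, so the duplication is harmless; only for symlinks does the second call create a distinct watch on the link inode itself.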
Lennart Poettering
d6d00b650f core: optimize loop in path_spec_fd_event()
Let's avoid the whole loop if it can never match.
2021-05-25 23:14:34 +02:00
Lennart Poettering
795125cd11 core: log about all errors in path_spec_watch()
So far we logged most, but not all, errors. Let's log all of them.
2021-05-25 23:14:30 +02:00
Lennart Poettering
44ff2a5e9c core: align path inotify mask table a bit 2021-05-25 23:13:52 +02:00
Lennart Poettering
c473437862
Merge pull request #19322 from poettering/dep-split
core: rework dependency system to be based on atoms + add three new dep types
2021-05-25 22:07:11 +02:00
Lennart Poettering
9f48b4e40e man: document that loginctl {terminate|kill}-{session|user} take the empty string, optionally
Fixes: #19711
2021-05-25 17:42:34 +02:00
Lennart Poettering
68892f94ae loginctl: kill calling user when invoked with empty string
As suggested by: #19711
2021-05-25 17:40:54 +02:00
Lennart Poettering
0760363274 test: add test for OnSuccess= + Uphold= + PropagatesStopTo= + BindsTo= 2021-05-25 16:06:30 +02:00
Lennart Poettering
3ba471facb test-engine: ensure atom bits are properly packed
Let's make sure all atoms are actually used, and no holes are left.
2021-05-25 16:06:27 +02:00
Lennart Poettering
99e9af257a core: reorder where we add units to queues in unit_notify()
This moves all calls that shall do deferred work on detecting whether to
start/stop the unit or dependent units after a unit state change to the
end of the function, to make things easier to read.

So far, these calls were spread all over the function, and
conditionalized needlessly on MANAGER_RELOADING(). This is unnecessary,
since the queues are not dispatched while reloading anyway, and
immediately before acting on a queued unit we'll check if the suggested
operation really makes sense.

The only conditionalization we leave in is on checking the new unit
state itself, since we have that in a local variable anyway.
2021-05-25 16:03:03 +02:00
Lennart Poettering
56c5959202 core: change BoundBy= dependency handling to be processed by a deferred work queue
So far, StopWhenUnneeded= handling and UpheldBy= handling were already
processed by a queue that is dispatched in a deferred mode of operation
instead of instantly. This changes BoundBy= handling to be processed the
same way.

This should ensure that all *event*-to-job propagation is done directly
from unit_notify(), while all *state*-to-job propagation is done from a
deferred work queue, quite systematically. The work queue is submitted
to by unit_notify() too.

Key really is the difference between event and state: some jobs shall be
queued one-time on events (think: OnFailure= + OnSuccess= and similar),
others shall be queued continuously when a specific state is in effect
(think: UpheldBy=). The latter cases are usually the effect of the
combination of states of a few units (e.g. StopWhenUnneeded= checks
whether any of the Wants=/Requires=/… deps are still up before acting),
and hence it makes sense to trigger them to be run after an individual
unit's state changed, but to process them on a queue that runs whenever
there's nothing else to do. That ensures the decision on them is only
taken after all jobs/queued IO events are dispatched and things have
settled, so that it makes sense to come to a combined conclusion. If
we'd dispatch this work immediately inside of unit_notify() we'd always
act instantly, even though another event from another unit that is
already queued might make the work unnecessary or invalid.

This is mostly a commit to make things philosophically clean. It does
not add features, but it should make corner cases more robust.
2021-05-25 16:03:03 +02:00
Lennart Poettering
116654d2cf core: make unneeded check a bit tighter
Let's not consider a unit unneeded while it is reloading.

Unneeded should be a pretty weak concept: if there's any doubt that
something might be needed, then assume it is.
2021-05-25 16:03:03 +02:00
Lennart Poettering
7e9212bf1a core: order reverse dep table in same way as enum 2021-05-25 16:03:03 +02:00
Lennart Poettering
0bc488c99a core: implement Uphold= dependency type
This is like a really strong version of Wants=, that keeps starting the
specified unit if it is ever found inactive.

This is an alternative to Restart= inside a unit, acknowledging the fact
that whether to keep restarting the unit is sometimes not a property of
the unit itself but of the state of the system.

This implements a part of what #4263 requests, i.e. there's no
distinction between "always" and "opportunistic". We just dumbly
implement "always" and become active whenever we see no job queued for
an inactive unit that is supposed to be upheld.
2021-05-25 16:03:03 +02:00
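A minimal sketch of how this might be used (the unit names are invented for illustration, not taken from the commit):

```ini
# uphold-demo.target — hypothetical; as long as this target is up,
# worker.service is started again whenever it is found inactive or failed.
[Unit]
Description=Demo target upholding a worker
Upholds=worker.service
```

Unlike Restart= inside worker.service itself, this keeps the restart policy with the unit that wants the worker around, so it stops applying as soon as the target goes down.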
Lennart Poettering
294446dcb9 core: add new OnSuccess= dependency type
This is similar to OnFailure= but is activated whenever a unit returns
into inactive state successfully.

I was always afraid of adding this, since it effectively allows building
loops and makes our engine Turing complete, but it pretty much already
was; that was just hidden.

Given that we have per-unit ratelimits as well as an event loop global
ratelimit, I finally feel safe to add this, especially since it actually is useful.

Fixes: #13386
2021-05-25 16:03:03 +02:00
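Sketched with invented unit names, a follow-up unit can now be triggered when a oneshot finishes successfully:

```ini
# backup.service — hypothetical; cleanup.service is activated whenever
# this unit deactivates successfully.
[Unit]
Description=Nightly backup
OnSuccess=cleanup.service

[Service]
Type=oneshot
ExecStart=/usr/bin/true
```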
Lennart Poettering
47cd17ead4 core: use StopPropagatedFrom= as default for .mount → .device unit dependencies
Let's make use of the new dependency type for .mount/.device units,
after all we added it for this purpose.

Fixes: #9869
2021-05-25 16:03:03 +02:00
Lennart Poettering
ffec78c05b core: add new PropagatesStopTo= dependency (and inverse)
This takes inspiration from PropagatesReloadTo=, but propagates
stop jobs instead of restart jobs.

This is defined based on exactly two atoms: UNIT_ATOM_PROPAGATE_STOP +
UNIT_ATOM_RETROACTIVE_STOP_ON_STOP. The former ensures that when the
unit the dependency is originating from is stopped based on user
request, we'll propagate the stop job to the target unit, too. In
addition, when the originating unit suddenly stops from external causes,
the stopping is propagated too. Note that this does *not* include the
UNIT_ATOM_CANNOT_BE_ACTIVE_WITHOUT atom (which is used by BoundBy=),
i.e. this dependency is purely about propagating "edges" and not
"levels", i.e. it's about propagating specific events, instead of
continuous states.

This is supposed to be useful for dependencies between .mount units and
their backing .device units. So far we either placed a BindsTo= or
Requires= dependency between them. The former gave a very clear binding
of the two units together, but it was problematic if users establish
mounts manually with different block device sources than our
configuration defines, as we might then come to the conclusion that the
backing device was absent and thus we need to unmount again what the user
mounted. By combining Requires= with the new StopPropagatedFrom= (i.e.
the inverse of PropagatesStopTo=) we can get behaviour that matches BindsTo=
in every single atom but one: UNIT_ATOM_CANNOT_BE_ACTIVE_WITHOUT is
absent, and hence the level-triggered logic doesn't apply.

Replaces: #11340
2021-05-25 16:03:03 +02:00
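The edge-versus-level distinction can be sketched with hypothetical unit names; the new setting propagates stop jobs without the "must stay active" logic of BindsTo=:

```ini
# a.service — hypothetical; stopping a.service also enqueues a stop job
# for b.service, but b.service is not torn down merely because a.service
# happens to be inactive (no UNIT_ATOM_CANNOT_BE_ACTIVE_WITHOUT atom).
[Unit]
PropagatesStopTo=b.service
```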
Lennart Poettering
629b2a6f7b core: add a reverse dep for OnFailure=
Let's add an implicit reverse dep OnFailureOf=. This is exposed via the
bus to make things more debuggable: you can now ask systemd for which
units a specific unit is the failure handler.

OnFailure= was the only dependency type that had no inverse, this fixes
that.

Now that deps are a bit cheaper, it should be OK to add deps that only
serve debug purposes.
2021-05-25 16:03:03 +02:00
Lennart Poettering
39628fedac core: hide cgroup fields in unit_dump() for non-cgroup unit types
A bunch of properties in the main Unit structure only make sense for
cgroup units. Let's hide them from unit types that have no relation to
cgroups.
2021-05-25 16:03:03 +02:00
Lennart Poettering
d219a2b07c core: convert Slice= into a proper dependency (and add a back dependency)
The slice a unit is assigned to is currently a UnitRef reference. Let's
turn it into a proper dependency, to simplify and clean up code a bit.
Now that new dep types are cheaper, deps should generally be preferable
over everything else, if the concept applies.

This brings one major benefit: we often have to iterate through all units
a slice contains. So far we iterated through all Before= dependencies of
the slice unit to achieve that, filtering out unrelated units, and
taking benefit of the fact that slice units are implicitly ordered
Before= the units they contain. By making Slice= a proper dependency,
and having an accompanying SliceOf= dependency type, this is much
simpler and nicer as we can directly enumerate the units a slice
contains.

The forward dependency is actually called InSlice internally, since we
already used the UNIT_SLICE name as UnitType field. However, since we
don't intend to expose the dependency to users as a dep anyway (we already
have the regular Slice D-Bus property for this) this shouldn't matter.
The SliceOf= implicit dependency type (the reverse of Slice=/InSlice=)
is exported over the bus, to make things a bit easier to debug and
more discoverable.
2021-05-25 16:03:01 +02:00
Lennart Poettering
12f64221b0 core: add UNIT_GET_SLICE() helper
In a later commit we intend to move the slice logic to use proper
dependencies instead of a "UnitRef" object. This preparatory commit
replaces direct use of the slice UnitRef object with a static inline
function UNIT_GET_SLICE() that is both easier to grok and allows us to
easily replace its internal implementation later on.
2021-05-25 16:02:00 +02:00
Lennart Poettering
8ddba3f266 test-engine: extend engine test
Let's verify that the dependency type to atom mapping is consistent.

Let's also verify that dependency merging works correctly.
2021-05-25 15:54:19 +02:00
Lennart Poettering
defe63b0f3 core: rebreak a few comments 2021-05-25 15:54:19 +02:00
Lennart Poettering
15ed3c3a18 core: split dependency types into atoms 2021-05-25 15:54:19 +02:00
Lennart Poettering
641d3761d4 hashmap: add helper to test if iterator is still at beginning 2021-05-25 15:47:09 +02:00
56 changed files with 2054 additions and 853 deletions


@@ -659,9 +659,9 @@
<varlistentry>
<term><option>tpm2-pcrs=</option></term>
<listitem><para>Takes a comma separated list of numeric TPM2 PCR (i.e. "Platform Configuration
Register") indexes to bind the TPM2 volume unlocking to. This option is only useful when TPM2
enrollment metadata is not available in the LUKS2 JSON token header already, the way
<listitem><para>Takes a <literal>+</literal> separated list of numeric TPM2 PCR (i.e. "Platform
Configuration Register") indexes to bind the TPM2 volume unlocking to. This option is only useful
when TPM2 enrollment metadata is not available in the LUKS2 JSON token header already, the way
<command>systemd-cryptenroll</command> writes it there. If not used (and no metadata in the LUKS2
JSON token header defines it), defaults to a list of a single entry: PCR 7. Assign an empty string to
encode a policy that binds the key to no PCRs, making the key accessible to local programs regardless


@@ -113,18 +113,18 @@
<varlistentry>
<term><command>terminate-session</command> <replaceable>ID</replaceable></term>
<listitem><para>Terminates a session. This kills all processes
of the session and deallocates all resources attached to the
session. </para></listitem>
<listitem><para>Terminates a session. This kills all processes of the session and deallocates all
resources attached to the session. If the argument is specified as an empty string, the session invoking
the command is terminated.</para></listitem>
</varlistentry>
<varlistentry>
<term><command>kill-session</command> <replaceable>ID</replaceable></term>
<listitem><para>Send a signal to one or more processes of the
session. Use <option>--kill-who=</option> to select which
process to kill. Use <option>--signal=</option> to select the
signal to send.</para></listitem>
<listitem><para>Send a signal to one or more processes of the session. Use
<option>--kill-who=</option> to select which process to kill. Use <option>--signal=</option> to
select the signal to send. If the argument is specified as an empty string, the signal is sent to the
session invoking the command.</para></listitem>
</varlistentry>
</variablelist></refsect2>
@@ -184,17 +184,17 @@
<varlistentry>
<term><command>terminate-user</command> <replaceable>USER</replaceable></term>
<listitem><para>Terminates all sessions of a user. This kills
all processes of all sessions of the user and deallocates all
runtime resources attached to the user.</para></listitem>
<listitem><para>Terminates all sessions of a user. This kills all processes of all sessions of the
user and deallocates all runtime resources attached to the user. If the argument is specified as an
empty string, the sessions of the user invoking the command are terminated.</para></listitem>
</varlistentry>
<varlistentry>
<term><command>kill-user</command> <replaceable>USER</replaceable></term>
<listitem><para>Send a signal to all processes of a user. Use
<option>--signal=</option> to select the signal to send.
</para></listitem>
<listitem><para>Send a signal to all processes of a user. Use <option>--signal=</option> to select
the signal to send. If the argument is specified as an empty string, the signal is sent to the sessions
of the user invoking the command.</para></listitem>
</varlistentry>
</variablelist></refsect2>


@@ -1630,6 +1630,12 @@ node /org/freedesktop/systemd1/unit/avahi_2ddaemon_2eservice {
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly as OnFailure = ['...', ...];
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly as OnFailureOf = ['...', ...];
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly as OnSuccess = ['...', ...];
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly as OnSuccessOf = ['...', ...];
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly as Triggers = ['...', ...];
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly as TriggeredBy = ['...', ...];
@@ -1638,8 +1644,14 @@ node /org/freedesktop/systemd1/unit/avahi_2ddaemon_2eservice {
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly as ReloadPropagatedFrom = ['...', ...];
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly as PropagatesStopTo = ['...', ...];
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly as StopPropagatedFrom = ['...', ...];
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly as JoinsNamespaceOf = ['...', ...];
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly as SliceOf = ['...', ...];
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly as RequiresMountsFor = ['...', ...];
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly as Documentation = ['...', ...];
@@ -1694,6 +1706,8 @@ node /org/freedesktop/systemd1/unit/avahi_2ddaemon_2eservice {
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly b DefaultDependencies = ...;
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly s OnSuccesJobMode = '...';
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly s OnFailureJobMode = '...';
@org.freedesktop.DBus.Property.EmitsChangedSignal("const")
readonly b IgnoreOnIsolate = ...;
@@ -1771,10 +1785,22 @@ node /org/freedesktop/systemd1/unit/avahi_2ddaemon_2eservice {
<!--property ConsistsOf is not documented!-->
<!--property OnFailureOf is not documented!-->
<!--property OnSuccess is not documented!-->
<!--property OnSuccessOf is not documented!-->
<!--property ReloadPropagatedFrom is not documented!-->
<!--property PropagatesStopTo is not documented!-->
<!--property StopPropagatedFrom is not documented!-->
<!--property JoinsNamespaceOf is not documented!-->
<!--property SliceOf is not documented!-->
<!--property FreezerState is not documented!-->
<!--property DropInPaths is not documented!-->
@@ -1789,6 +1815,8 @@ node /org/freedesktop/systemd1/unit/avahi_2ddaemon_2eservice {
<!--property CanFreeze is not documented!-->
<!--property OnSuccesJobMode is not documented!-->
<!--property OnFailureJobMode is not documented!-->
<!--property JobRunningTimeoutUSec is not documented!-->
@@ -1901,6 +1929,12 @@ node /org/freedesktop/systemd1/unit/avahi_2ddaemon_2eservice {
<variablelist class="dbus-property" generated="True" extra-ref="OnFailure"/>
<variablelist class="dbus-property" generated="True" extra-ref="OnFailureOf"/>
<variablelist class="dbus-property" generated="True" extra-ref="OnSuccess"/>
<variablelist class="dbus-property" generated="True" extra-ref="OnSuccessOf"/>
<variablelist class="dbus-property" generated="True" extra-ref="Triggers"/>
<variablelist class="dbus-property" generated="True" extra-ref="TriggeredBy"/>
@@ -1909,8 +1943,14 @@ node /org/freedesktop/systemd1/unit/avahi_2ddaemon_2eservice {
<variablelist class="dbus-property" generated="True" extra-ref="ReloadPropagatedFrom"/>
<variablelist class="dbus-property" generated="True" extra-ref="PropagatesStopTo"/>
<variablelist class="dbus-property" generated="True" extra-ref="StopPropagatedFrom"/>
<variablelist class="dbus-property" generated="True" extra-ref="JoinsNamespaceOf"/>
<variablelist class="dbus-property" generated="True" extra-ref="SliceOf"/>
<variablelist class="dbus-property" generated="True" extra-ref="RequiresMountsFor"/>
<variablelist class="dbus-property" generated="True" extra-ref="Documentation"/>
@@ -1979,6 +2019,8 @@ node /org/freedesktop/systemd1/unit/avahi_2ddaemon_2eservice {
<variablelist class="dbus-property" generated="True" extra-ref="DefaultDependencies"/>
<variablelist class="dbus-property" generated="True" extra-ref="OnSuccesJobMode"/>
<variablelist class="dbus-property" generated="True" extra-ref="OnFailureJobMode"/>
<variablelist class="dbus-property" generated="True" extra-ref="IgnoreOnIsolate"/>


@@ -176,11 +176,11 @@
<term><option>--tpm2-pcrs=</option><arg rep="repeat">PCR</arg></term>
<listitem><para>Configures the TPM2 PCRs (Platform Configuration Registers) to bind the enrollment
requested via <option>--tpm2-device=</option> to. Takes a comma separated list of numeric PCR indexes
in the range 0…23. If not used, defaults to PCR 7 only. If an empty string is specified, binds the
enrollment to no PCRs at all. PCRs allow binding the enrollment to specific software versions and
system state, so that the enrolled unlocking key is only accessible (may be "unsealed") if specific
trusted software and/or configuration is used.</para></listitem>
requested via <option>--tpm2-device=</option> to. Takes a <literal>+</literal> separated list of
numeric PCR indexes in the range 0…23. If not used, defaults to PCR 7 only. If an empty string is
specified, binds the enrollment to no PCRs at all. PCRs allow binding the enrollment to specific
software versions and system state, so that the enrolled unlocking key is only accessible (may be
"unsealed") if specific trusted software and/or configuration is used.</para></listitem>
<table>
<title>Well-known PCR Definitions</title>


@@ -708,6 +708,24 @@
</listitem>
</varlistentry>
<varlistentry>
<term><varname>Upholds=</varname></term>
<listitem><para>Configures dependencies similar to <varname>Wants=</varname>, but as long as this unit
is up, all units listed in <varname>Upholds=</varname> are started whenever found to be inactive or
failed, and no job is queued for them. While a <varname>Wants=</varname> dependency on another unit
has a one-time effect when this unit is started, an <varname>Upholds=</varname> dependency on it has a
continuous effect, constantly restarting the unit if necessary. This is an alternative to the
<varname>Restart=</varname> setting of service units, to ensure they are kept running whatever
happens.</para>
<para>When <varname>Upholds=b.service</varname> is used on <filename>a.service</filename>, this
dependency will show as <varname>UpheldBy=a.service</varname> in the property listing of
<filename>b.service</filename>. The <varname>UpheldBy=</varname> dependency cannot be specified
directly.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>Conflicts=</varname></term>
@@ -776,24 +794,36 @@
<varlistentry>
<term><varname>OnFailure=</varname></term>
<listitem><para>A space-separated list of one or more units
that are activated when this unit enters the
<literal>failed</literal> state. A service unit using
<varname>Restart=</varname> enters the failed state only after
the start limits are reached.</para></listitem>
<listitem><para>A space-separated list of one or more units that are activated when this unit enters
the <literal>failed</literal> state. A service unit using <varname>Restart=</varname> enters the
failed state only after the start limits are reached.</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>OnSuccess=</varname></term>
<listitem><para>A space-separated list of one or more units that are activated when this unit enters
the <literal>inactive</literal> state.</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>PropagatesReloadTo=</varname></term>
<term><varname>ReloadPropagatedFrom=</varname></term>
<listitem><para>A space-separated list of one or more units
where reload requests on this unit will be propagated to, or
reload requests on the other unit will be propagated to this
unit, respectively. Issuing a reload request on a unit will
automatically also enqueue a reload request on all units that
the reload request shall be propagated to via these two
settings.</para></listitem>
<listitem><para>A space-separated list of one or more units to which reload requests from this unit
shall be propagated, or units from which reload requests shall be propagated to this unit,
respectively. Issuing a reload request on a unit will automatically also enqueue reload requests on
all units that are linked to it using these two settings.</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>PropagatesStopTo=</varname></term>
<term><varname>StopPropagatedFrom=</varname></term>
<listitem><para>A space-separated list of one or more units to which stop requests from this unit
shall be propagated, or units from which stop requests shall be propagated to this unit,
respectively. Issuing a stop request on a unit will automatically also enqueue stop requests on all
units that are linked to it using these two settings.</para></listitem>
</varlistentry>
<varlistentry>


@@ -51,6 +51,7 @@ typedef struct {
#define _IDX_ITERATOR_FIRST (UINT_MAX - 1)
#define ITERATOR_FIRST ((Iterator) { .idx = _IDX_ITERATOR_FIRST, .next_key = NULL })
#define ITERATOR_IS_FIRST(i) ((i).idx == _IDX_ITERATOR_FIRST)
/* Macros for type checking */
#define PTR_COMPATIBLE_WITH_HASHMAP_BASE(h) \


@@ -265,23 +265,32 @@ static const char* const unit_dependency_table[_UNIT_DEPENDENCY_MAX] = {
[UNIT_WANTS] = "Wants",
[UNIT_BINDS_TO] = "BindsTo",
[UNIT_PART_OF] = "PartOf",
[UNIT_UPHOLDS] = "Upholds",
[UNIT_REQUIRED_BY] = "RequiredBy",
[UNIT_REQUISITE_OF] = "RequisiteOf",
[UNIT_WANTED_BY] = "WantedBy",
[UNIT_BOUND_BY] = "BoundBy",
[UNIT_UPHELD_BY] = "UpheldBy",
[UNIT_CONSISTS_OF] = "ConsistsOf",
[UNIT_CONFLICTS] = "Conflicts",
[UNIT_CONFLICTED_BY] = "ConflictedBy",
[UNIT_BEFORE] = "Before",
[UNIT_AFTER] = "After",
[UNIT_ON_SUCCESS] = "OnSuccess",
[UNIT_ON_SUCCESS_OF] = "OnSuccessOf",
[UNIT_ON_FAILURE] = "OnFailure",
[UNIT_ON_FAILURE_OF] = "OnFailureOf",
[UNIT_TRIGGERS] = "Triggers",
[UNIT_TRIGGERED_BY] = "TriggeredBy",
[UNIT_PROPAGATES_RELOAD_TO] = "PropagatesReloadTo",
[UNIT_RELOAD_PROPAGATED_FROM] = "ReloadPropagatedFrom",
[UNIT_PROPAGATES_STOP_TO] = "PropagatesStopTo",
[UNIT_STOP_PROPAGATED_FROM] = "StopPropagatedFrom",
[UNIT_JOINS_NAMESPACE_OF] = "JoinsNamespaceOf",
[UNIT_REFERENCES] = "References",
[UNIT_REFERENCED_BY] = "ReferencedBy",
[UNIT_IN_SLICE] = "InSlice",
[UNIT_SLICE_OF] = "SliceOf",
};
DEFINE_STRING_TABLE_LOOKUP(unit_dependency, UnitDependency);


@@ -211,6 +211,7 @@ typedef enum UnitDependency {
UNIT_WANTS,
UNIT_BINDS_TO,
UNIT_PART_OF,
UNIT_UPHOLDS,
/* Inverse of the above */
UNIT_REQUIRED_BY, /* inverse of 'requires' is 'required_by' */
@@ -218,6 +219,7 @@ typedef enum UnitDependency {
UNIT_WANTED_BY, /* inverse of 'wants' */
UNIT_BOUND_BY, /* inverse of 'binds_to' */
UNIT_CONSISTS_OF, /* inverse of 'part_of' */
UNIT_UPHELD_BY, /* inverse of 'uphold' */
/* Negative dependencies */
UNIT_CONFLICTS, /* inverse of 'conflicts' is 'conflicted_by' */
@@ -227,8 +229,11 @@ typedef enum UnitDependency {
UNIT_BEFORE, /* inverse of 'before' is 'after' and vice versa */
UNIT_AFTER,
/* On Failure */
/* OnSuccess= + OnFailure= */
UNIT_ON_SUCCESS,
UNIT_ON_SUCCESS_OF,
UNIT_ON_FAILURE,
UNIT_ON_FAILURE_OF,
/* Triggers (i.e. a socket triggers a service) */
UNIT_TRIGGERS,
@@ -238,6 +243,10 @@ typedef enum UnitDependency {
UNIT_PROPAGATES_RELOAD_TO,
UNIT_RELOAD_PROPAGATED_FROM,
/* Propagate stops */
UNIT_PROPAGATES_STOP_TO,
UNIT_STOP_PROPAGATED_FROM,
/* Joins namespace of */
UNIT_JOINS_NAMESPACE_OF,
@@ -245,6 +254,10 @@ typedef enum UnitDependency {
UNIT_REFERENCES, /* Inverse of 'references' is 'referenced_by' */
UNIT_REFERENCED_BY,
/* Slice= */
UNIT_IN_SLICE,
UNIT_SLICE_OF,
_UNIT_DEPENDENCY_MAX,
_UNIT_DEPENDENCY_INVALID = -EINVAL,
} UnitDependency;


@@ -422,7 +422,7 @@ static int bpf_firewall_prepare_access_maps(
assert(ret_ipv6_map_fd);
assert(ret_has_any);
for (p = u; p; p = UNIT_DEREF(p->slice)) {
for (p = u; p; p = UNIT_GET_SLICE(p)) {
CGroupContext *cc;
cc = unit_get_cgroup_context(p);
@@ -463,7 +463,7 @@ static int bpf_firewall_prepare_access_maps(
return ipv6_map_fd;
}
for (p = u; p; p = UNIT_DEREF(p->slice)) {
for (p = u; p; p = UNIT_GET_SLICE(p)) {
CGroupContext *cc;
cc = unit_get_cgroup_context(p);


@@ -680,7 +680,7 @@ int cgroup_add_bpf_foreign_program(CGroupContext *c, uint32_t attach_type, const
if (c && c->entry##_set) \
return c->entry; \
\
while ((u = UNIT_DEREF(u->slice))) { \
while ((u = UNIT_GET_SLICE(u))) { \
c = unit_get_cgroup_context(u); \
if (c && c->default_##entry##_set) \
return c->default_##entry; \
@@ -1549,7 +1549,7 @@ static bool unit_get_needs_bpf_firewall(Unit *u) {
return true;
/* If any parent slice has an IP access list defined, it applies too */
for (p = UNIT_DEREF(u->slice); p; p = UNIT_DEREF(p->slice)) {
for (p = UNIT_GET_SLICE(u); p; p = UNIT_GET_SLICE(p)) {
c = unit_get_cgroup_context(p);
if (!c)
return false;
@@ -1700,12 +1700,10 @@ CGroupMask unit_get_members_mask(Unit *u) {
u->cgroup_members_mask = 0;
if (u->type == UNIT_SLICE) {
void *v;
Unit *member;
HASHMAP_FOREACH_KEY(v, member, u->dependencies[UNIT_BEFORE])
if (UNIT_DEREF(member->slice) == u)
u->cgroup_members_mask |= unit_get_subtree_mask(member); /* note that this calls ourselves again, for the children */
UNIT_FOREACH_DEPENDENCY(member, u, UNIT_ATOM_SLICE_OF)
u->cgroup_members_mask |= unit_get_subtree_mask(member); /* note that this calls ourselves again, for the children */
}
u->cgroup_members_mask_valid = true;
@@ -1713,14 +1711,16 @@ CGroupMask unit_get_members_mask(Unit *u) {
}
CGroupMask unit_get_siblings_mask(Unit *u) {
Unit *slice;
assert(u);
/* Returns the mask of controllers all of the unit's siblings
* require, i.e. the members mask of the unit's parent slice
* if there is one. */
if (UNIT_ISSET(u->slice))
return unit_get_members_mask(UNIT_DEREF(u->slice));
slice = UNIT_GET_SLICE(u);
if (slice)
return unit_get_members_mask(slice);
return unit_get_subtree_mask(u); /* we are the top-level slice */
}
@@ -1737,6 +1737,7 @@ static CGroupMask unit_get_disable_mask(Unit *u) {
CGroupMask unit_get_ancestor_disable_mask(Unit *u) {
CGroupMask mask;
Unit *slice;
assert(u);
mask = unit_get_disable_mask(u);
@@ -1744,8 +1745,9 @@ CGroupMask unit_get_ancestor_disable_mask(Unit *u) {
/* Returns the mask of controllers which are marked as forcibly
* disabled in any ancestor unit or the unit in question. */
if (UNIT_ISSET(u->slice))
mask |= unit_get_ancestor_disable_mask(UNIT_DEREF(u->slice));
slice = UNIT_GET_SLICE(u);
if (slice)
mask |= unit_get_ancestor_disable_mask(slice);
return mask;
}
@@ -1787,13 +1789,16 @@ CGroupMask unit_get_enable_mask(Unit *u) {
}
void unit_invalidate_cgroup_members_masks(Unit *u) {
Unit *slice;
assert(u);
/* Recurse invalidate the member masks cache all the way up the tree */
u->cgroup_members_mask_valid = false;
if (UNIT_ISSET(u->slice))
unit_invalidate_cgroup_members_masks(UNIT_DEREF(u->slice));
slice = UNIT_GET_SLICE(u);
if (slice)
unit_invalidate_cgroup_members_masks(slice);
}
const char *unit_get_realized_cgroup_path(Unit *u, CGroupMask mask) {
@@ -1807,7 +1812,7 @@ const char *unit_get_realized_cgroup_path(Unit *u, CGroupMask mask) {
FLAGS_SET(u->cgroup_realized_mask, mask))
return u->cgroup_path;
u = UNIT_DEREF(u->slice);
u = UNIT_GET_SLICE(u);
}
return NULL;
@@ -1821,7 +1826,8 @@ static const char *migrate_callback(CGroupMask mask, void *userdata) {
}
char *unit_default_cgroup_path(const Unit *u) {
_cleanup_free_ char *escaped = NULL, *slice = NULL;
_cleanup_free_ char *escaped = NULL, *slice_path = NULL;
Unit *slice;
int r;
assert(u);
@@ -1829,8 +1835,9 @@ char *unit_default_cgroup_path(const Unit *u) {
if (unit_has_name(u, SPECIAL_ROOT_SLICE))
return strdup(u->manager->cgroup_root);
if (UNIT_ISSET(u->slice) && !unit_has_name(UNIT_DEREF(u->slice), SPECIAL_ROOT_SLICE)) {
r = cg_slice_to_path(UNIT_DEREF(u->slice)->id, &slice);
slice = UNIT_GET_SLICE(u);
if (slice && !unit_has_name(slice, SPECIAL_ROOT_SLICE)) {
r = cg_slice_to_path(slice->id, &slice_path);
if (r < 0)
return NULL;
}
@ -1839,7 +1846,7 @@ char *unit_default_cgroup_path(const Unit *u) {
if (!escaped)
return NULL;
return path_join(empty_to_root(u->manager->cgroup_root), slice, escaped);
return path_join(empty_to_root(u->manager->cgroup_root), slice_path, escaped);
}
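`unit_default_cgroup_path()` joins the cgroup root, the slice's path, and the escaped unit name. The slice path itself comes from `cg_slice_to_path()`, which expands the dash-encoded slice name into nested directories. A simplified, hedged re-implementation of that expansion idea (no escaping or validation, unlike the real helper):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch of the idea behind cg_slice_to_path(): a slice
 * name encodes its position in the hierarchy with dashes, so
 * "foo-bar.slice" lives at "foo.slice/foo-bar.slice". Caller frees
 * the result; returns NULL on allocation failure. */
static char *slice_to_path(const char *name) {
        size_t base_len = strlen(name) - strlen(".slice");
        char *out = malloc(4 * strlen(name) + 16); /* generous upper bound */
        if (!out)
                return NULL;
        out[0] = '\0';
        for (size_t i = 1; i <= base_len; i++)
                if (i == base_len || name[i] == '-') {
                        if (out[0])
                                strcat(out, "/");
                        strncat(out, name, i);   /* prefix up to this dash */
                        strcat(out, ".slice");
                }
        return out;
}
```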
int unit_set_cgroup_path(Unit *u, const char *path) {
@ -2313,14 +2320,16 @@ static void unit_remove_from_cgroup_realize_queue(Unit *u) {
* hierarchy downwards to the unit in question. */
static int unit_realize_cgroup_now_enable(Unit *u, ManagerState state) {
CGroupMask target_mask, enable_mask, new_target_mask, new_enable_mask;
Unit *slice;
int r;
assert(u);
/* First go deal with this unit's parent, or we won't be able to enable
* any new controllers at this layer. */
if (UNIT_ISSET(u->slice)) {
r = unit_realize_cgroup_now_enable(UNIT_DEREF(u->slice), state);
slice = UNIT_GET_SLICE(u);
if (slice) {
r = unit_realize_cgroup_now_enable(slice, state);
if (r < 0)
return r;
}
@ -2343,36 +2352,29 @@ static int unit_realize_cgroup_now_enable(Unit *u, ManagerState state) {
* hierarchy upwards to the unit in question. */
static int unit_realize_cgroup_now_disable(Unit *u, ManagerState state) {
Unit *m;
void *v;
assert(u);
if (u->type != UNIT_SLICE)
return 0;
HASHMAP_FOREACH_KEY(v, m, u->dependencies[UNIT_BEFORE]) {
UNIT_FOREACH_DEPENDENCY(m, u, UNIT_ATOM_SLICE_OF) {
CGroupMask target_mask, enable_mask, new_target_mask, new_enable_mask;
int r;
if (UNIT_DEREF(m->slice) != u)
continue;
/* The cgroup for this unit might not actually be fully
* realised yet, in which case it isn't holding any controllers
* open anyway. */
/* The cgroup for this unit might not actually be fully realised yet, in which case it isn't
* holding any controllers open anyway. */
if (!m->cgroup_realized)
continue;
/* We must disable those below us first in order to release the
* controller. */
/* We must disable those below us first in order to release the controller. */
if (m->type == UNIT_SLICE)
(void) unit_realize_cgroup_now_disable(m, state);
target_mask = unit_get_target_mask(m);
enable_mask = unit_get_enable_mask(m);
/* We can only disable in this direction, don't try to enable
* anything. */
/* We can only disable in this direction, don't try to enable anything. */
if (unit_has_mask_disables_realized(m, target_mask, enable_mask))
continue;
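The two realize helpers maintain one invariant: controllers are enabled top-down (ancestors first) and disabled bottom-up (descendants first), since a controller can only be dropped from a cgroup's subtree control once no child still holds it. In miniature, with a hypothetical single-child tree rather than systemd's data structures:

```c
#include <assert.h>
#include <stddef.h>

/* Toy node: one parent, one child, and a controller mask. */
typedef struct Node {
        struct Node *parent;
        struct Node *child;
        unsigned mask;   /* controllers currently enabled here */
} Node;

/* Enable: recurse to ancestors first, like unit_realize_cgroup_now_enable(). */
static void enable_now(Node *n, unsigned mask) {
        if (n->parent)
                enable_now(n->parent, mask);
        n->mask |= mask;
}

/* Disable: recurse to descendants first, like unit_realize_cgroup_now_disable(). */
static void disable_now(Node *n, unsigned mask) {
        if (n->child)
                disable_now(n->child, mask);
        n->mask &= ~mask;
}
```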
@ -2433,6 +2435,7 @@ static int unit_realize_cgroup_now_disable(Unit *u, ManagerState state) {
* Returns 0 on success and < 0 on failure. */
static int unit_realize_cgroup_now(Unit *u, ManagerState state) {
CGroupMask target_mask, enable_mask;
Unit *slice;
int r;
assert(u);
@ -2451,8 +2454,9 @@ static int unit_realize_cgroup_now(Unit *u, ManagerState state) {
return r;
/* Enable controllers above us, if there are any */
if (UNIT_ISSET(u->slice)) {
r = unit_realize_cgroup_now_enable(UNIT_DEREF(u->slice), state);
slice = UNIT_GET_SLICE(u);
if (slice) {
r = unit_realize_cgroup_now_enable(slice, state);
if (r < 0)
return r;
}
@ -2515,15 +2519,11 @@ void unit_add_family_to_cgroup_realize_queue(Unit *u) {
do {
Unit *m;
void *v;
/* Children of u likely changed when we're called */
u->cgroup_members_mask_valid = false;
HASHMAP_FOREACH_KEY(v, m, u->dependencies[UNIT_BEFORE]) {
/* Skip units that have a dependency on the slice but aren't actually in it. */
if (UNIT_DEREF(m->slice) != u)
continue;
UNIT_FOREACH_DEPENDENCY(m, u, UNIT_ATOM_SLICE_OF) {
/* No point in doing cgroup application for units without active processes. */
if (UNIT_IS_INACTIVE_OR_FAILED(unit_active_state(m)))
@ -2534,8 +2534,8 @@ void unit_add_family_to_cgroup_realize_queue(Unit *u) {
if (!m->cgroup_realized)
continue;
/* If the unit doesn't need any new controllers and has current ones realized, it
* doesn't need any changes. */
/* If the unit doesn't need any new controllers and has current ones
* realized, it doesn't need any changes. */
if (unit_has_mask_realized(m,
unit_get_target_mask(m),
unit_get_enable_mask(m)))
@ -2546,10 +2546,14 @@ void unit_add_family_to_cgroup_realize_queue(Unit *u) {
/* Parent comes after children */
unit_add_to_cgroup_realize_queue(u);
} while ((u = UNIT_DEREF(u->slice)));
u = UNIT_GET_SLICE(u);
} while (u);
}
int unit_realize_cgroup(Unit *u) {
Unit *slice;
assert(u);
if (!UNIT_HAS_CGROUP_CONTEXT(u))
@ -2564,8 +2568,9 @@ int unit_realize_cgroup(Unit *u) {
* This call will defer work on the siblings and derealized ancestors to the next event loop
* iteration and synchronously creates the parent cgroups (unit_realize_cgroup_now). */
if (UNIT_ISSET(u->slice))
unit_add_family_to_cgroup_realize_queue(UNIT_DEREF(u->slice));
slice = UNIT_GET_SLICE(u);
if (slice)
unit_add_family_to_cgroup_realize_queue(slice);
/* And realize this one now (and apply the values) */
return unit_realize_cgroup_now(u, manager_state(u->manager));
@ -3782,11 +3787,9 @@ void unit_invalidate_cgroup_bpf(Unit *u) {
* list of our children includes our own. */
if (u->type == UNIT_SLICE) {
Unit *member;
void *v;
HASHMAP_FOREACH_KEY(v, member, u->dependencies[UNIT_BEFORE])
if (UNIT_DEREF(member->slice) == u)
unit_invalidate_cgroup_bpf(member);
UNIT_FOREACH_DEPENDENCY(member, u, UNIT_ATOM_SLICE_OF)
unit_invalidate_cgroup_bpf(member);
}
}

View File

@ -155,21 +155,27 @@ static int property_get_dependencies(
void *userdata,
sd_bus_error *error) {
Hashmap **h = userdata;
Unit *u;
Unit *u = userdata, *other;
UnitDependency d;
Hashmap *deps;
void *v;
int r;
assert(bus);
assert(reply);
assert(h);
assert(u);
d = unit_dependency_from_string(property);
assert_se(d >= 0);
deps = unit_get_dependencies(u, d);
r = sd_bus_message_open_container(reply, 'a', "s");
if (r < 0)
return r;
HASHMAP_FOREACH_KEY(v, u, *h) {
r = sd_bus_message_append(reply, "s", u->id);
HASHMAP_FOREACH_KEY(v, other, deps) {
r = sd_bus_message_append(reply, "s", other->id);
if (r < 0)
return r;
}
@ -844,26 +850,32 @@ const sd_bus_vtable bus_unit_vtable[] = {
SD_BUS_PROPERTY("Id", "s", NULL, offsetof(Unit, id), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("Names", "as", property_get_names, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("Following", "s", property_get_following, 0, 0),
SD_BUS_PROPERTY("Requires", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_REQUIRES]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("Requisite", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_REQUISITE]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("Wants", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_WANTS]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("BindsTo", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_BINDS_TO]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("PartOf", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_PART_OF]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("RequiredBy", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_REQUIRED_BY]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("RequisiteOf", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_REQUISITE_OF]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("WantedBy", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_WANTED_BY]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("BoundBy", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_BOUND_BY]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("ConsistsOf", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_CONSISTS_OF]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("Conflicts", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_CONFLICTS]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("ConflictedBy", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_CONFLICTED_BY]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("Before", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_BEFORE]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("After", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_AFTER]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("OnFailure", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_ON_FAILURE]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("Triggers", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_TRIGGERS]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("TriggeredBy", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_TRIGGERED_BY]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("PropagatesReloadTo", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_PROPAGATES_RELOAD_TO]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("ReloadPropagatedFrom", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_RELOAD_PROPAGATED_FROM]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("JoinsNamespaceOf", "as", property_get_dependencies, offsetof(Unit, dependencies[UNIT_JOINS_NAMESPACE_OF]), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("Requires", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("Requisite", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("Wants", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("BindsTo", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("PartOf", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("RequiredBy", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("RequisiteOf", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("WantedBy", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("BoundBy", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("ConsistsOf", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("Conflicts", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("ConflictedBy", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("Before", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("After", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("OnFailure", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("OnFailureOf", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("OnSuccess", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("OnSuccessOf", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("Triggers", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("TriggeredBy", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("PropagatesReloadTo", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("ReloadPropagatedFrom", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("PropagatesStopTo", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("StopPropagatedFrom", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("JoinsNamespaceOf", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("SliceOf", "as", property_get_dependencies, 0, SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("RequiresMountsFor", "as", property_get_requires_mounts_for, offsetof(Unit, requires_mounts_for), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("Documentation", "as", NULL, offsetof(Unit, documentation), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("Description", "s", property_get_description, 0, SD_BUS_VTABLE_PROPERTY_CONST),
@ -893,6 +905,7 @@ const sd_bus_vtable bus_unit_vtable[] = {
SD_BUS_PROPERTY("RefuseManualStop", "b", bus_property_get_bool, offsetof(Unit, refuse_manual_stop), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("AllowIsolate", "b", bus_property_get_bool, offsetof(Unit, allow_isolate), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("DefaultDependencies", "b", bus_property_get_bool, offsetof(Unit, default_dependencies), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("OnSuccessJobMode", "s", property_get_job_mode, offsetof(Unit, on_success_job_mode), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("OnFailureJobMode", "s", property_get_job_mode, offsetof(Unit, on_failure_job_mode), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("IgnoreOnIsolate", "b", bus_property_get_bool, offsetof(Unit, ignore_on_isolate), SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_PROPERTY("NeedDaemonReload", "b", property_get_need_daemon_reload, 0, SD_BUS_VTABLE_PROPERTY_CONST),
@ -2116,6 +2129,9 @@ static int bus_unit_set_transient_property(
if (streq(name, "DefaultDependencies"))
return bus_set_transient_bool(u, name, &u->default_dependencies, message, flags, error);
if (streq(name, "OnSuccessJobMode"))
return bus_set_transient_job_mode(u, name, &u->on_success_job_mode, message, flags, error);
if (streq(name, "OnFailureJobMode"))
return bus_set_transient_job_mode(u, name, &u->on_failure_job_mode, message, flags, error);
@ -2231,7 +2247,7 @@ static int bus_unit_set_transient_property(
return sd_bus_error_setf(error, SD_BUS_ERROR_INVALID_ARGS, "Unit name '%s' is not a slice", s);
if (!UNIT_WRITE_FLAGS_NOOP(flags)) {
r = unit_set_slice(u, slice);
r = unit_set_slice(u, slice, UNIT_DEPENDENCY_FILE);
if (r < 0)
return r;
@ -2288,12 +2304,16 @@ static int bus_unit_set_transient_property(
UNIT_WANTS,
UNIT_BINDS_TO,
UNIT_PART_OF,
UNIT_UPHOLDS,
UNIT_CONFLICTS,
UNIT_BEFORE,
UNIT_AFTER,
UNIT_ON_SUCCESS,
UNIT_ON_FAILURE,
UNIT_PROPAGATES_RELOAD_TO,
UNIT_RELOAD_PROPAGATED_FROM,
UNIT_PROPAGATES_STOP_TO,
UNIT_STOP_PROPAGATED_FROM,
UNIT_JOINS_NAMESPACE_OF))
return sd_bus_error_setf(error, SD_BUS_ERROR_INVALID_ARGS, "Dependency type %s may not be created transiently.", unit_dependency_to_string(d));

View File

@ -468,7 +468,7 @@ static void device_upgrade_mount_deps(Unit *u) {
/* Let's upgrade Requires= to BindsTo= on us. (Used when SYSTEMD_MOUNT_DEVICE_BOUND is set) */
HASHMAP_FOREACH_KEY(v, other, u->dependencies[UNIT_REQUIRED_BY]) {
HASHMAP_FOREACH_KEY(v, other, unit_get_dependencies(u, UNIT_REQUIRED_BY)) {
if (other->type != UNIT_MOUNT)
continue;

View File

@ -455,7 +455,6 @@ int job_type_merge_and_collapse(JobType *a, JobType b, Unit *u) {
static bool job_is_runnable(Job *j) {
Unit *other;
void *v;
assert(j);
assert(j->installed);
@ -477,16 +476,16 @@ static bool job_is_runnable(Job *j) {
if (j->type == JOB_NOP)
return true;
HASHMAP_FOREACH_KEY(v, other, j->unit->dependencies[UNIT_AFTER])
if (other->job && job_compare(j, other->job, UNIT_AFTER) > 0) {
UNIT_FOREACH_DEPENDENCY(other, j->unit, UNIT_ATOM_AFTER)
if (other->job && job_compare(j, other->job, UNIT_ATOM_AFTER) > 0) {
log_unit_debug(j->unit,
"starting held back, waiting for: %s",
other->id);
return false;
}
HASHMAP_FOREACH_KEY(v, other, j->unit->dependencies[UNIT_BEFORE])
if (other->job && job_compare(j, other->job, UNIT_BEFORE) > 0) {
UNIT_FOREACH_DEPENDENCY(other, j->unit, UNIT_ATOM_BEFORE)
if (other->job && job_compare(j, other->job, UNIT_ATOM_BEFORE) > 0) {
log_unit_debug(j->unit,
"stopping held back, waiting for: %s",
other->id);
@ -951,13 +950,12 @@ static void job_emit_done_status_message(Unit *u, uint32_t job_id, JobType t, Jo
job_print_done_status_message(u, t, result);
}
static void job_fail_dependencies(Unit *u, UnitDependency d) {
static void job_fail_dependencies(Unit *u, UnitDependencyAtom match_atom) {
Unit *other;
void *v;
assert(u);
HASHMAP_FOREACH_KEY(v, other, u->dependencies[d]) {
UNIT_FOREACH_DEPENDENCY(other, u, match_atom) {
Job *j = other->job;
if (!j)
@ -970,10 +968,8 @@ static void job_fail_dependencies(Unit *u, UnitDependency d) {
}
int job_finish_and_invalidate(Job *j, JobResult result, bool recursive, bool already) {
Unit *u;
Unit *other;
Unit *u, *other;
JobType t;
void *v;
assert(j);
assert(j->installed);
@ -1012,43 +1008,36 @@ int job_finish_and_invalidate(Job *j, JobResult result, bool recursive, bool alr
/* Fail depending jobs on failure */
if (result != JOB_DONE && recursive) {
if (IN_SET(t, JOB_START, JOB_VERIFY_ACTIVE)) {
job_fail_dependencies(u, UNIT_REQUIRED_BY);
job_fail_dependencies(u, UNIT_REQUISITE_OF);
job_fail_dependencies(u, UNIT_BOUND_BY);
} else if (t == JOB_STOP)
job_fail_dependencies(u, UNIT_CONFLICTED_BY);
if (IN_SET(t, JOB_START, JOB_VERIFY_ACTIVE))
job_fail_dependencies(u, UNIT_ATOM_PROPAGATE_START_FAILURE);
else if (t == JOB_STOP)
job_fail_dependencies(u, UNIT_ATOM_PROPAGATE_STOP_FAILURE);
}
/* A special check to make sure we take down anything RequisiteOf if we
* aren't active. This is when the verify-active job merges with a
* satisfying job type, and then loses it's invalidation effect, as the
* result there is JOB_DONE for the start job we merged into, while we
* should be failing the depending job if the said unit isn't in fact
* active. Oneshots are an example of this, where going directly from
* activating to inactive is success.
/* A special check to make sure we take down anything RequisiteOf= if we aren't active. This is when
* the verify-active job merges with a satisfying job type, and then loses its invalidation effect,
* as the result there is JOB_DONE for the start job we merged into, while we should be failing the
* depending job if the said unit isn't in fact active. Oneshots are an example of this, where going
* directly from activating to inactive is success.
*
* This happens when you use ConditionXYZ= in a unit too, since in that
* case the job completes with the JOB_DONE result, but the unit never
* really becomes active. Note that such a case still involves merging:
* This happens when you use ConditionXYZ= in a unit too, since in that case the job completes with
* the JOB_DONE result, but the unit never really becomes active. Note that such a case still
* involves merging:
*
* A start job waits for something else, and a verify-active comes in
* and merges in the installed job. Then, later, when it becomes
* runnable, it finishes with JOB_DONE result as execution on conditions
* not being met is skipped, breaking our dependency semantics.
* A start job waits for something else, and a verify-active comes in and merges in the installed
* job. Then, later, when it becomes runnable, it finishes with JOB_DONE result as execution on
* conditions not being met is skipped, breaking our dependency semantics.
*
* Also, depending on if start job waits or not, the merging may or may
* not happen (the verify-active job may trigger after it finishes), so
* you get undeterministic results without this check.
* Also, depending on if start job waits or not, the merging may or may not happen (the verify-active
* job may trigger after it finishes), so you get nondeterministic results without this check.
*/
if (result == JOB_DONE && recursive && !UNIT_IS_ACTIVE_OR_RELOADING(unit_active_state(u))) {
if (IN_SET(t, JOB_START, JOB_RELOAD))
job_fail_dependencies(u, UNIT_REQUISITE_OF);
}
/* Trigger OnFailure dependencies that are not generated by
* the unit itself. We don't treat JOB_CANCELED as failure in
* this context. And JOB_FAILURE is already handled by the
* unit itself. */
if (result == JOB_DONE && recursive &&
IN_SET(t, JOB_START, JOB_RELOAD) &&
!UNIT_IS_ACTIVE_OR_RELOADING(unit_active_state(u)))
job_fail_dependencies(u, UNIT_ATOM_PROPAGATE_INACTIVE_START_AS_FAILURE);
/* Trigger OnFailure= dependencies that are not generated by the unit itself. We don't treat
* JOB_CANCELED as failure in this context. And JOB_FAILURE is already handled by the unit itself. */
if (IN_SET(result, JOB_TIMEOUT, JOB_DEPENDENCY)) {
log_unit_struct(u, LOG_NOTICE,
"JOB_TYPE=%s", job_type_to_string(t),
@ -1058,19 +1047,19 @@ int job_finish_and_invalidate(Job *j, JobResult result, bool recursive, bool alr
job_type_to_string(t),
job_result_to_string(result)));
unit_start_on_failure(u);
unit_start_on_failure(u, "OnFailure=", UNIT_ATOM_ON_FAILURE, u->on_failure_job_mode);
}
unit_trigger_notify(u);
finish:
/* Try to start the next jobs that can be started */
HASHMAP_FOREACH_KEY(v, other, u->dependencies[UNIT_AFTER])
UNIT_FOREACH_DEPENDENCY(other, u, UNIT_ATOM_AFTER)
if (other->job) {
job_add_to_run_queue(other->job);
job_add_to_gc_queue(other->job);
}
HASHMAP_FOREACH_KEY(v, other, u->dependencies[UNIT_BEFORE])
UNIT_FOREACH_DEPENDENCY(other, u, UNIT_ATOM_BEFORE)
if (other->job) {
job_add_to_run_queue(other->job);
job_add_to_gc_queue(other->job);
@ -1420,7 +1409,6 @@ int job_get_timeout(Job *j, usec_t *timeout) {
bool job_may_gc(Job *j) {
Unit *other;
void *v;
assert(j);
@ -1449,12 +1437,12 @@ bool job_may_gc(Job *j) {
return false;
/* The logic is inverse to job_is_runnable, we cannot GC as long as we block any job. */
HASHMAP_FOREACH_KEY(v, other, j->unit->dependencies[UNIT_BEFORE])
if (other->job && job_compare(j, other->job, UNIT_BEFORE) < 0)
UNIT_FOREACH_DEPENDENCY(other, j->unit, UNIT_ATOM_BEFORE)
if (other->job && job_compare(j, other->job, UNIT_ATOM_BEFORE) < 0)
return false;
HASHMAP_FOREACH_KEY(v, other, j->unit->dependencies[UNIT_AFTER])
if (other->job && job_compare(j, other->job, UNIT_AFTER) < 0)
UNIT_FOREACH_DEPENDENCY(other, j->unit, UNIT_ATOM_AFTER)
if (other->job && job_compare(j, other->job, UNIT_ATOM_AFTER) < 0)
return false;
return true;
@ -1500,7 +1488,6 @@ int job_get_before(Job *j, Job*** ret) {
_cleanup_free_ Job** list = NULL;
Unit *other = NULL;
size_t n = 0;
void *v;
/* Returns a list of all pending jobs that need to finish before this job may be started. */
@ -1512,10 +1499,10 @@ int job_get_before(Job *j, Job*** ret) {
return 0;
}
HASHMAP_FOREACH_KEY(v, other, j->unit->dependencies[UNIT_AFTER]) {
UNIT_FOREACH_DEPENDENCY(other, j->unit, UNIT_ATOM_AFTER) {
if (!other->job)
continue;
if (job_compare(j, other->job, UNIT_AFTER) <= 0)
if (job_compare(j, other->job, UNIT_ATOM_AFTER) <= 0)
continue;
if (!GREEDY_REALLOC(list, n+1))
@ -1523,10 +1510,10 @@ int job_get_before(Job *j, Job*** ret) {
list[n++] = other->job;
}
HASHMAP_FOREACH_KEY(v, other, j->unit->dependencies[UNIT_BEFORE]) {
UNIT_FOREACH_DEPENDENCY(other, j->unit, UNIT_ATOM_BEFORE) {
if (!other->job)
continue;
if (job_compare(j, other->job, UNIT_BEFORE) <= 0)
if (job_compare(j, other->job, UNIT_ATOM_BEFORE) <= 0)
continue;
if (!GREEDY_REALLOC(list, n+1))
@ -1545,21 +1532,20 @@ int job_get_after(Job *j, Job*** ret) {
_cleanup_free_ Job** list = NULL;
Unit *other = NULL;
size_t n = 0;
void *v;
assert(j);
assert(ret);
/* Returns a list of all pending jobs that are waiting for this job to finish. */
HASHMAP_FOREACH_KEY(v, other, j->unit->dependencies[UNIT_BEFORE]) {
UNIT_FOREACH_DEPENDENCY(other, j->unit, UNIT_ATOM_BEFORE) {
if (!other->job)
continue;
if (other->job->ignore_order)
continue;
if (job_compare(j, other->job, UNIT_BEFORE) >= 0)
if (job_compare(j, other->job, UNIT_ATOM_BEFORE) >= 0)
continue;
if (!GREEDY_REALLOC(list, n+1))
@ -1567,14 +1553,14 @@ int job_get_after(Job *j, Job*** ret) {
list[n++] = other->job;
}
HASHMAP_FOREACH_KEY(v, other, j->unit->dependencies[UNIT_AFTER]) {
UNIT_FOREACH_DEPENDENCY(other, j->unit, UNIT_ATOM_AFTER) {
if (!other->job)
continue;
if (other->job->ignore_order)
continue;
if (job_compare(j, other->job, UNIT_AFTER) >= 0)
if (job_compare(j, other->job, UNIT_ATOM_AFTER) >= 0)
continue;
if (!GREEDY_REALLOC(list, n+1))
@ -1666,13 +1652,14 @@ const char* job_type_to_access_method(JobType t) {
* stop a + start b 1st step stop a, 2nd step start b
* stop a + stop b 1st step stop b, 2nd step stop a
*
* This has the side effect that restarts are properly
* synchronized too.
* This has the side effect that restarts are properly synchronized too.
*/
int job_compare(Job *a, Job *b, UnitDependency assume_dep) {
int job_compare(Job *a, Job *b, UnitDependencyAtom assume_dep) {
assert(a);
assert(b);
assert(a->type < _JOB_TYPE_MAX_IN_TRANSACTION);
assert(b->type < _JOB_TYPE_MAX_IN_TRANSACTION);
assert(IN_SET(assume_dep, UNIT_AFTER, UNIT_BEFORE));
assert(IN_SET(assume_dep, UNIT_ATOM_AFTER, UNIT_ATOM_BEFORE));
/* Trivial cases first */
if (a->type == JOB_NOP || b->type == JOB_NOP)
@ -1681,12 +1668,11 @@ int job_compare(Job *a, Job *b, UnitDependency assume_dep) {
if (a->ignore_order || b->ignore_order)
return 0;
if (assume_dep == UNIT_AFTER)
return -job_compare(b, a, UNIT_BEFORE);
if (assume_dep == UNIT_ATOM_AFTER)
return -job_compare(b, a, UNIT_ATOM_BEFORE);
/* Let's make it simple, JOB_STOP goes always first (in case both ua and ub stop,
* then ub's stop goes first anyway).
* JOB_RESTART is JOB_STOP in disguise (before it is patched to JOB_START). */
/* Let's make it simple, JOB_STOP always goes first (in case both ua and ub stop, then ub's stop goes
* first anyway). JOB_RESTART is JOB_STOP in disguise (before it is patched to JOB_START). */
if (IN_SET(b->type, JOB_STOP, JOB_RESTART))
return 1;
else

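The `job_compare()` hunk above boils down to two rules: stop jobs sort before start jobs (and of two stops, the later-ordered unit's stop runs first), and an After relation is just the Before comparison with operands and sign flipped. Distilled into a reduced, hypothetical form (not systemd's real enums or full trivial-case handling):

```c
#include <assert.h>

/* Reduced job types and dependency atoms for illustration only. */
typedef enum { SKETCH_START, SKETCH_STOP } SketchJobType;
typedef enum { SKETCH_BEFORE, SKETCH_AFTER } SketchAtom;

/* Compare jobs on units ordered a-Before-b: negative means a's job
 * runs first, positive means b's job runs first. */
static int sketch_job_compare(SketchJobType a, SketchJobType b, SketchAtom dep) {
        if (dep == SKETCH_AFTER)
                return -sketch_job_compare(b, a, SKETCH_BEFORE);
        if (b == SKETCH_STOP)
                return 1;    /* b's stop runs first, even if a also stops */
        return -1;           /* otherwise a goes first, as ordering says */
}
```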
View File

@ -6,7 +6,9 @@
#include "sd-event.h"
#include "list.h"
#include "unit-dependency-atom.h"
#include "unit-name.h"
#include "unit.h"
typedef struct Job Job;
typedef struct JobDependency JobDependency;
@ -240,4 +242,4 @@ JobResult job_result_from_string(const char *s) _pure_;
const char* job_type_to_access_method(JobType t);
int job_compare(Job *a, Job *b, UnitDependency assume_dep);
int job_compare(Job *a, Job *b, UnitDependencyAtom assume_dep);

View File

@ -261,14 +261,18 @@ Unit.Requisite, config_parse_unit_deps,
Unit.Wants, config_parse_unit_deps, UNIT_WANTS, 0
Unit.BindsTo, config_parse_unit_deps, UNIT_BINDS_TO, 0
Unit.BindTo, config_parse_unit_deps, UNIT_BINDS_TO, 0
Unit.Upholds, config_parse_unit_deps, UNIT_UPHOLDS, 0
Unit.Conflicts, config_parse_unit_deps, UNIT_CONFLICTS, 0
Unit.Before, config_parse_unit_deps, UNIT_BEFORE, 0
Unit.After, config_parse_unit_deps, UNIT_AFTER, 0
Unit.OnSuccess, config_parse_unit_deps, UNIT_ON_SUCCESS, 0
Unit.OnFailure, config_parse_unit_deps, UNIT_ON_FAILURE, 0
Unit.PropagatesReloadTo, config_parse_unit_deps, UNIT_PROPAGATES_RELOAD_TO, 0
Unit.PropagateReloadTo, config_parse_unit_deps, UNIT_PROPAGATES_RELOAD_TO, 0
Unit.ReloadPropagatedFrom, config_parse_unit_deps, UNIT_RELOAD_PROPAGATED_FROM, 0
Unit.PropagateReloadFrom, config_parse_unit_deps, UNIT_RELOAD_PROPAGATED_FROM, 0
Unit.PropagatesStopTo, config_parse_unit_deps, UNIT_PROPAGATES_STOP_TO, 0
Unit.StopPropagatedFrom, config_parse_unit_deps, UNIT_STOP_PROPAGATED_FROM, 0
Unit.PartOf, config_parse_unit_deps, UNIT_PART_OF, 0
Unit.JoinsNamespaceOf, config_parse_unit_deps, UNIT_JOINS_NAMESPACE_OF, 0
Unit.RequiresOverridable, config_parse_obsolete_unit_deps, UNIT_REQUIRES, 0
@ -279,6 +283,7 @@ Unit.RefuseManualStart, config_parse_bool,
Unit.RefuseManualStop, config_parse_bool, 0, offsetof(Unit, refuse_manual_stop)
Unit.AllowIsolate, config_parse_bool, 0, offsetof(Unit, allow_isolate)
Unit.DefaultDependencies, config_parse_bool, 0, offsetof(Unit, default_dependencies)
Unit.OnSuccessJobMode, config_parse_job_mode, 0, offsetof(Unit, on_success_job_mode)
Unit.OnFailureJobMode, config_parse_job_mode, 0, offsetof(Unit, on_failure_job_mode)
{# The following is a legacy alias name for compatibility #}
Unit.OnFailureIsolate, config_parse_job_mode_isolate, 0, offsetof(Unit, on_failure_job_mode)

View File

@ -789,7 +789,7 @@ int config_parse_exec(
return ignore ? 0 : -ENOEXEC;
}
if (!path_is_absolute(path) && !filename_is_valid(path)) {
if (!(path_is_absolute(path) ? path_is_valid(path) : filename_is_valid(path))) {
log_syntax(unit, ignore ? LOG_WARNING : LOG_ERR, filename, line, 0,
"Neither a valid executable name nor an absolute path%s: %s",
ignore ? ", ignoring" : "", path);
@ -3574,7 +3574,7 @@ int config_parse_unit_slice(
return 0;
}
r = unit_set_slice(u, slice);
r = unit_set_slice(u, slice, UNIT_DEPENDENCY_FILE);
if (r < 0) {
log_syntax(unit, LOG_WARNING, filename, line, r, "Failed to assign slice %s to unit %s, ignoring: %m", slice->id, u->id);
return 0;

View File

@ -1141,12 +1141,11 @@ enum {
static void unit_gc_mark_good(Unit *u, unsigned gc_marker) {
Unit *other;
void *v;
u->gc_marker = gc_marker + GC_OFFSET_GOOD;
/* Recursively mark referenced units as GOOD as well */
HASHMAP_FOREACH_KEY(v, other, u->dependencies[UNIT_REFERENCES])
UNIT_FOREACH_DEPENDENCY(other, u, UNIT_ATOM_REFERENCES)
if (other->gc_marker == gc_marker + GC_OFFSET_UNSURE)
unit_gc_mark_good(other, gc_marker);
}
@ -1154,7 +1153,6 @@ static void unit_gc_mark_good(Unit *u, unsigned gc_marker) {
static void unit_gc_sweep(Unit *u, unsigned gc_marker) {
Unit *other;
bool is_bad;
void *v;
assert(u);
@ -1172,7 +1170,7 @@ static void unit_gc_sweep(Unit *u, unsigned gc_marker) {
is_bad = true;
HASHMAP_FOREACH_KEY(v, other, u->dependencies[UNIT_REFERENCED_BY]) {
UNIT_FOREACH_DEPENDENCY(other, u, UNIT_ATOM_REFERENCED_BY) {
unit_gc_sweep(other, gc_marker);
if (other->gc_marker == gc_marker + GC_OFFSET_GOOD)
@ -1297,7 +1295,7 @@ static unsigned manager_dispatch_stop_when_unneeded_queue(Manager *m) {
/* If stopping a unit fails continuously we might enter a stop loop here, hence stop acting on the
* service being unnecessary after a while. */
if (!ratelimit_below(&u->auto_stop_ratelimit)) {
if (!ratelimit_below(&u->auto_start_stop_ratelimit)) {
log_unit_warning(u, "Unit not needed anymore, but not stopping since we tried this too often recently.");
continue;
}
@ -1311,6 +1309,82 @@ static unsigned manager_dispatch_stop_when_unneeded_queue(Manager *m) {
return n;
}
static unsigned manager_dispatch_start_when_upheld_queue(Manager *m) {
unsigned n = 0;
Unit *u;
int r;
assert(m);
while ((u = m->start_when_upheld_queue)) {
_cleanup_(sd_bus_error_free) sd_bus_error error = SD_BUS_ERROR_NULL;
Unit *culprit = NULL;
assert(u->in_start_when_upheld_queue);
LIST_REMOVE(start_when_upheld_queue, m->start_when_upheld_queue, u);
u->in_start_when_upheld_queue = false;
n++;
if (!unit_is_upheld_by_active(u, &culprit))
continue;
log_unit_debug(u, "Unit is started because upheld by active unit %s.", culprit->id);
/* If starting a unit fails continuously we might enter a start loop here, hence stop acting on the
* service being upheld after a while. */
if (!ratelimit_below(&u->auto_start_stop_ratelimit)) {
log_unit_warning(u, "Unit needs to be started because active unit %s upholds it, but not starting since we tried this too often recently.", culprit->id);
continue;
}
r = manager_add_job(u->manager, JOB_START, u, JOB_FAIL, NULL, &error, NULL);
if (r < 0)
log_unit_warning_errno(u, r, "Failed to enqueue start job, ignoring: %s", bus_error_message(&error, r));
}
return n;
}
static unsigned manager_dispatch_stop_when_bound_queue(Manager *m) {
unsigned n = 0;
Unit *u;
int r;
assert(m);
while ((u = m->stop_when_bound_queue)) {
_cleanup_(sd_bus_error_free) sd_bus_error error = SD_BUS_ERROR_NULL;
Unit *culprit = NULL;
assert(u->in_stop_when_bound_queue);
LIST_REMOVE(stop_when_bound_queue, m->stop_when_bound_queue, u);
u->in_stop_when_bound_queue = false;
n++;
if (!unit_is_bound_by_inactive(u, &culprit))
continue;
log_unit_debug(u, "Unit is stopped because bound to inactive unit %s.", culprit->id);
/* If stopping a unit fails continuously we might enter a stop loop here, hence stop acting on the
* unit after a while. */
if (!ratelimit_below(&u->auto_start_stop_ratelimit)) {
log_unit_warning(u, "Unit needs to be stopped because it is bound to inactive unit %s, but not stopping since we tried this too often recently.", culprit->id);
continue;
}
r = manager_add_job(u->manager, JOB_STOP, u, JOB_REPLACE, NULL, &error, NULL);
if (r < 0)
log_unit_warning_errno(u, r, "Failed to enqueue stop job, ignoring: %s", bus_error_message(&error, r));
}
return n;
}
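Both new dispatch functions share one shape: an intrusive singly-linked work queue with a membership flag on each unit, drained until empty, with per-item work done after removal. That shared skeleton, with illustrative names in place of Manager/Unit:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy queued item: intrusive next pointer plus an in-queue flag,
 * mirroring in_start_when_upheld_queue / in_stop_when_bound_queue. */
typedef struct Item {
        struct Item *next;
        bool in_queue;
        int value;
} Item;

/* Enqueue is idempotent: the flag prevents double-insertion. */
static void queue_push(Item **head, Item *i) {
        if (i->in_queue)
                return;
        i->next = *head;
        *head = i;
        i->in_queue = true;
}

/* Drain until empty, same loop shape as the manager queue dispatchers;
 * summing values stands in for the per-unit check plus job enqueue. */
static unsigned queue_drain(Item **head, int *sum) {
        unsigned n = 0;
        Item *i;
        while ((i = *head)) {
                *head = i->next;
                i->in_queue = false;
                n++;
                *sum += i->value;
        }
        return n;
}
```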
static void manager_clear_jobs_and_units(Manager *m) {
Unit *u;
@ -1329,6 +1403,8 @@ static void manager_clear_jobs_and_units(Manager *m) {
assert(!m->gc_unit_queue);
assert(!m->gc_job_queue);
assert(!m->stop_when_unneeded_queue);
assert(!m->start_when_upheld_queue);
assert(!m->stop_when_bound_queue);
assert(hashmap_isempty(m->jobs));
assert(hashmap_isempty(m->units));
@ -1870,30 +1946,28 @@ static int manager_dispatch_target_deps_queue(Manager *m) {
Unit *u;
int r = 0;
static const UnitDependency deps[] = {
UNIT_REQUIRED_BY,
UNIT_REQUISITE_OF,
UNIT_WANTED_BY,
UNIT_BOUND_BY
};
assert(m);
while ((u = m->target_deps_queue)) {
_cleanup_free_ Unit **targets = NULL;
int n_targets;
assert(u->in_target_deps_queue);
LIST_REMOVE(target_deps_queue, u->manager->target_deps_queue, u);
u->in_target_deps_queue = false;
for (size_t k = 0; k < ELEMENTSOF(deps); k++) {
Unit *target;
void *v;
/* Take an "atomic" snapshot of dependencies here, as the call below will likely modify the
* dependencies, and we can't have it that hash tables we iterate through are modified while
* we are iterating through them. */
n_targets = unit_get_dependency_array(u, UNIT_ATOM_DEFAULT_TARGET_DEPENDENCIES, &targets);
if (n_targets < 0)
return n_targets;
HASHMAP_FOREACH_KEY(v, target, u->dependencies[deps[k]]) {
r = unit_add_default_target_dependency(u, target);
if (r < 0)
return r;
}
for (int i = 0; i < n_targets; i++) {
r = unit_add_default_target_dependency(u, targets[i]);
if (r < 0)
return r;
}
}
@ -2958,6 +3032,12 @@ int manager_loop(Manager *m) {
if (manager_dispatch_cgroup_realize_queue(m) > 0)
continue;
if (manager_dispatch_start_when_upheld_queue(m) > 0)
continue;
if (manager_dispatch_stop_when_bound_queue(m) > 0)
continue;
if (manager_dispatch_stop_when_unneeded_queue(m) > 0)
continue;


@ -187,6 +187,12 @@ struct Manager {
/* Units that might be subject to StopWhenUnneeded= clean-up */
LIST_HEAD(Unit, stop_when_unneeded_queue);
/* Units which are upheld by another unit and which we might need to act on */
LIST_HEAD(Unit, start_when_upheld_queue);
/* Units that have BindsTo= another unit, and might need to be shutdown because the bound unit is not active. */
LIST_HEAD(Unit, stop_when_bound_queue);
sd_event *event;
/* This maps PIDs we care about to units that are interested in them. We allow multiple units to be interested in


@ -117,6 +117,8 @@ libcore_sources = '''
timer.h
transaction.c
transaction.h
unit-dependency-atom.c
unit-dependency-atom.h
unit-printf.c
unit-printf.h
unit-serialize.c


@ -66,6 +66,24 @@ static bool MOUNT_STATE_WITH_PROCESS(MountState state) {
MOUNT_CLEANING);
}
static MountParameters* get_mount_parameters_fragment(Mount *m) {
assert(m);
if (m->from_fragment)
return &m->parameters_fragment;
return NULL;
}
static MountParameters* get_mount_parameters(Mount *m) {
assert(m);
if (m->from_proc_self_mountinfo)
return &m->parameters_proc_self_mountinfo;
return get_mount_parameters_fragment(m);
}
static bool mount_is_automount(const MountParameters *p) {
assert(p);
@ -116,16 +134,33 @@ static bool mount_is_bind(const MountParameters *p) {
return false;
}
static bool mount_is_bound_to_device(const Mount *m) {
static bool mount_is_bound_to_device(Mount *m) {
const MountParameters *p;
if (m->from_fragment)
return true;
assert(m);
/* Determines whether to place a Requires= or BindsTo= dependency on the backing device unit. We do
* this by checking for the x-systemd.device-bound mount option. Iff it is set we use BindsTo=,
* otherwise Requires=. But note that we might combine the latter with StopPropagatedFrom=, see
* below. */
p = get_mount_parameters(m);
if (!p)
return false;
p = &m->parameters_proc_self_mountinfo;
return fstab_test_option(p->options, "x-systemd.device-bound\0");
}
static bool mount_propagate_stop(Mount *m) {
assert(m);
if (mount_is_bound_to_device(m)) /* If we are using BindsTo= the stop propagation is implicit, no need to bother */
return false;
return m->from_fragment; /* let's propagate stop whenever this is an explicitly configured unit,
* otherwise let's not bother. */
}
static bool mount_needs_quota(const MountParameters *p) {
assert(p);
@ -234,24 +269,6 @@ static void mount_done(Unit *u) {
m->timer_event_source = sd_event_source_unref(m->timer_event_source);
}
static MountParameters* get_mount_parameters_fragment(Mount *m) {
assert(m);
if (m->from_fragment)
return &m->parameters_fragment;
return NULL;
}
static MountParameters* get_mount_parameters(Mount *m) {
assert(m);
if (m->from_proc_self_mountinfo)
return &m->parameters_proc_self_mountinfo;
return get_mount_parameters_fragment(m);
}
static int update_parameters_proc_self_mountinfo(
Mount *m,
const char *what,
@ -367,8 +384,9 @@ static int mount_add_device_dependencies(Mount *m) {
return 0;
/* Mount units from /proc/self/mountinfo are not bound to devices by default since they're subject to
* races when devices are unplugged. But the user can still force this dep with an appropriate option
* (or udev property) so the mount units are automatically stopped when the device disappears
* races when mounts are established by other tools with different backing devices than what we
* maintain. The user can still force this to be a BindsTo= dependency with an appropriate option (or
* udev property) so the mount units are automatically stopped when the device disappears
* suddenly. */
dep = mount_is_bound_to_device(m) ? UNIT_BINDS_TO : UNIT_REQUIRES;
@ -378,6 +396,11 @@ static int mount_add_device_dependencies(Mount *m) {
r = unit_add_node_dependency(UNIT(m), p->what, dep, mask);
if (r < 0)
return r;
if (mount_propagate_stop(m)) {
r = unit_add_node_dependency(UNIT(m), p->what, UNIT_STOP_PROPAGATED_FROM, mask);
if (r < 0)
return r;
}
return unit_add_blockdev_dependency(UNIT(m), p->what, mask);
}


@ -36,10 +36,10 @@ static int path_dispatch_io(sd_event_source *source, int fd, uint32_t revents, v
int path_spec_watch(PathSpec *s, sd_event_io_handler_t handler) {
static const int flags_table[_PATH_TYPE_MAX] = {
[PATH_EXISTS] = IN_DELETE_SELF|IN_MOVE_SELF|IN_ATTRIB,
[PATH_EXISTS_GLOB] = IN_DELETE_SELF|IN_MOVE_SELF|IN_ATTRIB,
[PATH_CHANGED] = IN_DELETE_SELF|IN_MOVE_SELF|IN_ATTRIB|IN_CLOSE_WRITE|IN_CREATE|IN_DELETE|IN_MOVED_FROM|IN_MOVED_TO,
[PATH_MODIFIED] = IN_DELETE_SELF|IN_MOVE_SELF|IN_ATTRIB|IN_CLOSE_WRITE|IN_CREATE|IN_DELETE|IN_MOVED_FROM|IN_MOVED_TO|IN_MODIFY,
[PATH_EXISTS] = IN_DELETE_SELF|IN_MOVE_SELF|IN_ATTRIB,
[PATH_EXISTS_GLOB] = IN_DELETE_SELF|IN_MOVE_SELF|IN_ATTRIB,
[PATH_CHANGED] = IN_DELETE_SELF|IN_MOVE_SELF|IN_ATTRIB|IN_CLOSE_WRITE|IN_CREATE|IN_DELETE|IN_MOVED_FROM|IN_MOVED_TO,
[PATH_MODIFIED] = IN_DELETE_SELF|IN_MOVE_SELF|IN_ATTRIB|IN_CLOSE_WRITE|IN_CREATE|IN_DELETE|IN_MOVED_FROM|IN_MOVED_TO|IN_MODIFY,
[PATH_DIRECTORY_NOT_EMPTY] = IN_DELETE_SELF|IN_MOVE_SELF|IN_ATTRIB|IN_CREATE|IN_MOVED_TO,
};
@ -55,13 +55,15 @@ int path_spec_watch(PathSpec *s, sd_event_io_handler_t handler) {
s->inotify_fd = inotify_init1(IN_NONBLOCK|IN_CLOEXEC);
if (s->inotify_fd < 0) {
r = -errno;
r = log_error_errno(errno, "Failed to allocate inotify fd: %m");
goto fail;
}
r = sd_event_add_io(s->unit->manager->event, &s->event_source, s->inotify_fd, EPOLLIN, handler, s);
if (r < 0)
if (r < 0) {
log_error_errno(r, "Failed to add inotify fd to event loop: %m");
goto fail;
}
(void) sd_event_source_set_description(s->event_source, "path");
@ -69,9 +71,9 @@ int path_spec_watch(PathSpec *s, sd_event_io_handler_t handler) {
assert(!strstr(s->path, "//"));
for (slash = strchr(s->path, '/'); ; slash = strchr(slash+1, '/')) {
char *cut = NULL;
int flags;
char tmp;
bool incomplete = false;
int flags, wd = -1;
char tmp, *cut;
if (slash) {
cut = slash + (slash == s->path);
@ -79,28 +81,50 @@ int path_spec_watch(PathSpec *s, sd_event_io_handler_t handler) {
*cut = '\0';
flags = IN_MOVE_SELF | IN_DELETE_SELF | IN_ATTRIB | IN_CREATE | IN_MOVED_TO;
} else
} else {
cut = NULL;
flags = flags_table[s->type];
r = inotify_add_watch(s->inotify_fd, s->path, flags);
if (r < 0) {
if (IN_SET(errno, EACCES, ENOENT)) {
if (cut)
*cut = tmp;
break;
}
/* This second call to inotify_add_watch() should fail like the previous
* one and is done for logging the error in a comprehensive way. */
r = inotify_add_watch_and_warn(s->inotify_fd, s->path, flags);
if (r < 0) {
if (cut)
*cut = tmp;
goto fail;
}
/* Hmm, we succeeded in adding the watch this time... let's continue. */
}
/* If this is a symlink watch both the symlink inode and where it points to. If the inode is
* not a symlink both calls will install the same watch, which is redundant and doesn't
* hurt. */
for (int follow_symlink = 0; follow_symlink < 2; follow_symlink ++) {
uint32_t f = flags;
SET_FLAG(f, IN_DONT_FOLLOW, !follow_symlink);
wd = inotify_add_watch(s->inotify_fd, s->path, f);
if (wd < 0) {
if (IN_SET(errno, EACCES, ENOENT)) {
incomplete = true; /* This is an expected error, let's accept this
* quietly: we have an incomplete watch for
* now. */
break;
}
/* This second call to inotify_add_watch() should fail like the previous one
* and is done for logging the error in a comprehensive way. */
wd = inotify_add_watch_and_warn(s->inotify_fd, s->path, f);
if (wd < 0) {
if (cut)
*cut = tmp;
r = wd;
goto fail;
}
/* Hmm, we succeeded in adding the watch this time... let's continue. */
}
}
if (incomplete) {
if (cut)
*cut = tmp;
break;
}
exists = true;
/* Path exists, we don't need to watch parent too closely. */
@ -122,7 +146,7 @@ int path_spec_watch(PathSpec *s, sd_event_io_handler_t handler) {
oldslash = slash;
else {
/* whole path has been iterated over */
s->primary_wd = r;
s->primary_wd = wd;
break;
}
}
@ -151,7 +175,8 @@ int path_spec_fd_event(PathSpec *s, uint32_t revents) {
union inotify_event_buffer buffer;
struct inotify_event *e;
ssize_t l;
int r = 0;
assert(s);
if (revents != EPOLLIN)
return log_error_errno(SYNTHETIC_ERRNO(EINVAL),
@ -165,13 +190,12 @@ int path_spec_fd_event(PathSpec *s, uint32_t revents) {
return log_error_errno(errno, "Failed to read inotify event: %m");
}
FOREACH_INOTIFY_EVENT(e, buffer, l) {
if (IN_SET(s->type, PATH_CHANGED, PATH_MODIFIED) &&
s->primary_wd == e->wd)
r = 1;
}
if (IN_SET(s->type, PATH_CHANGED, PATH_MODIFIED))
FOREACH_INOTIFY_EVENT(e, buffer, l)
if (s->primary_wd == e->wd)
return 1;
return r;
return 0;
}
static bool path_spec_check_good(PathSpec *s, bool initial, bool from_trigger_notify) {


@ -1245,12 +1245,11 @@ static int service_collect_fds(
rn_socket_fds = 1;
} else {
void *v;
Unit *u;
/* Pass all our configured sockets for singleton services */
HASHMAP_FOREACH_KEY(v, u, UNIT(s)->dependencies[UNIT_TRIGGERED_BY]) {
UNIT_FOREACH_DEPENDENCY(u, UNIT(s), UNIT_ATOM_TRIGGERED_BY) {
_cleanup_free_ int *cfds = NULL;
Socket *sock;
int cn_fds;
@ -2937,7 +2936,7 @@ static int service_deserialize_item(Unit *u, const char *key, const char *value,
r = extract_first_word(&value, &fdn, NULL, EXTRACT_CUNESCAPE | EXTRACT_UNQUOTE);
if (r <= 0) {
log_unit_debug_errno(u, r, "Failed to parse fd-store-fd value \"%s\": %m", value);
log_unit_debug(u, "Failed to parse fd-store-fd value: %s", value);
return 0;
}


@ -47,25 +47,20 @@ static void slice_set_state(Slice *t, SliceState state) {
}
static int slice_add_parent_slice(Slice *s) {
Unit *u = UNIT(s), *parent;
Unit *u = UNIT(s);
_cleanup_free_ char *a = NULL;
int r;
assert(s);
if (UNIT_ISSET(u->slice))
if (UNIT_GET_SLICE(u))
return 0;
r = slice_build_parent_slice(u->id, &a);
if (r <= 0) /* 0 means root slice */
return r;
r = manager_load_unit(u->manager, a, NULL, NULL, &parent);
if (r < 0)
return r;
unit_ref_set(&u->slice, u, parent);
return 0;
return unit_add_dependency_by_name(u, UNIT_IN_SLICE, a, true, UNIT_DEPENDENCY_IMPLICIT);
}
static int slice_add_default_dependencies(Slice *s) {
@ -101,7 +96,7 @@ static int slice_verify(Slice *s) {
if (r < 0)
return log_unit_error_errno(UNIT(s), r, "Failed to determine parent slice: %m");
if (parent ? !unit_has_name(UNIT_DEREF(UNIT(s)->slice), parent) : UNIT_ISSET(UNIT(s)->slice))
if (parent ? !unit_has_name(UNIT_GET_SLICE(UNIT(s)), parent) : !!UNIT_GET_SLICE(UNIT(s)))
return log_unit_error_errno(UNIT(s), SYNTHETIC_ERRNO(ENOEXEC), "Located outside of parent slice. Refusing.");
return 0;
@ -346,15 +341,11 @@ static void slice_enumerate_perpetual(Manager *m) {
static bool slice_freezer_action_supported_by_children(Unit *s) {
Unit *member;
void *v;
int r;
assert(s);
HASHMAP_FOREACH_KEY(v, member, s->dependencies[UNIT_BEFORE]) {
int r;
if (UNIT_DEREF(member->slice) != s)
continue;
UNIT_FOREACH_DEPENDENCY(member, s, UNIT_ATOM_SLICE_OF) {
if (member->type == UNIT_SLICE) {
r = slice_freezer_action_supported_by_children(member);
@ -371,7 +362,6 @@ static bool slice_freezer_action_supported_by_children(Unit *s) {
static int slice_freezer_action(Unit *s, FreezerAction action) {
Unit *member;
void *v;
int r;
assert(s);
@ -382,15 +372,11 @@ static int slice_freezer_action(Unit *s, FreezerAction action) {
return 0;
}
HASHMAP_FOREACH_KEY(v, member, s->dependencies[UNIT_BEFORE]) {
if (UNIT_DEREF(member->slice) != s)
continue;
UNIT_FOREACH_DEPENDENCY(member, s, UNIT_ATOM_SLICE_OF) {
if (action == FREEZER_FREEZE)
r = UNIT_VTABLE(member)->freeze(member);
else
r = UNIT_VTABLE(member)->thaw(member);
if (r < 0)
return r;
}


@ -2340,11 +2340,9 @@ static void socket_enter_running(Socket *s, int cfd_in) {
if (cfd < 0) {
bool pending = false;
Unit *other;
void *v;
/* If there's already a start pending don't bother to
* do anything */
HASHMAP_FOREACH_KEY(v, other, UNIT(s)->dependencies[UNIT_TRIGGERS])
/* If there's already a start pending don't bother to do anything */
UNIT_FOREACH_DEPENDENCY(other, UNIT(s), UNIT_ATOM_TRIGGERS)
if (unit_active_or_pending(other)) {
pending = true;
break;


@ -35,34 +35,31 @@ static void target_set_state(Target *t, TargetState state) {
}
static int target_add_default_dependencies(Target *t) {
static const UnitDependency deps[] = {
UNIT_REQUIRES,
UNIT_REQUISITE,
UNIT_WANTS,
UNIT_BINDS_TO,
UNIT_PART_OF
};
int r;
_cleanup_free_ Unit **others = NULL;
int r, n_others;
assert(t);
if (!UNIT(t)->default_dependencies)
return 0;
/* Imply ordering for requirement dependencies on target units. Note that when the user created a contradicting
* ordering manually we won't add anything in here to make sure we don't create a loop. */
/* Imply ordering for requirement dependencies on target units. Note that when the user created a
* contradicting ordering manually we won't add anything in here to make sure we don't create a
* loop.
*
* Note that quite likely iterating through these dependencies will add new dependencies, which
* conflicts with the hashmap-based iteration logic. Hence, instead of iterating through the
* dependencies and acting on them as we go, first take an "atomic snapshot" of sorts and iterate
* through that. */
for (size_t k = 0; k < ELEMENTSOF(deps); k++) {
Unit *other;
void *v;
n_others = unit_get_dependency_array(UNIT(t), UNIT_ATOM_ADD_DEFAULT_TARGET_DEPENDENCY_QUEUE, &others);
if (n_others < 0)
return n_others;
HASHMAP_FOREACH_KEY(v, other, UNIT(t)->dependencies[deps[k]]) {
r = unit_add_default_target_dependency(other, UNIT(t));
if (r < 0)
return r;
}
for (int i = 0; i < n_others; i++) {
r = unit_add_default_target_dependency(others[i], UNIT(t));
if (r < 0)
return r;
}
if (unit_has_name(UNIT(t), SPECIAL_SHUTDOWN_TARGET))


@ -347,21 +347,20 @@ static char* merge_unit_ids(const char* unit_log_field, char **pairs) {
}
static int transaction_verify_order_one(Transaction *tr, Job *j, Job *from, unsigned generation, sd_bus_error *e) {
Unit *u;
void *v;
int r;
static const UnitDependency directions[] = {
UNIT_BEFORE,
UNIT_AFTER,
static const UnitDependencyAtom directions[] = {
UNIT_ATOM_BEFORE,
UNIT_ATOM_AFTER,
};
size_t d;
int r;
assert(tr);
assert(j);
assert(!j->transaction_prev);
/* Does a recursive sweep through the ordering graph, looking
* for a cycle. If we find a cycle we try to break it. */
/* Does a recursive sweep through the ordering graph, looking for a cycle. If we find a cycle we try
* to break it. */
/* Have we seen this before? */
if (j->generation == generation) {
@ -369,18 +368,14 @@ static int transaction_verify_order_one(Transaction *tr, Job *j, Job *from, unsi
_cleanup_free_ char **array = NULL, *unit_ids = NULL;
char **unit_id, **job_type;
/* If the marker is NULL we have been here already and
* decided the job was loop-free from here. Hence
* shortcut things and return right-away. */
/* If the marker is NULL we have been here already and decided the job was loop-free from
* here. Hence shortcut things and return right-away. */
if (!j->marker)
return 0;
/* So, the marker is not NULL and we already have been here. We have
* a cycle. Let's try to break it. We go backwards in our path and
* try to find a suitable job to remove. We use the marker to find
* our way back, since smart how we are we stored our way back in
* there. */
/* So, the marker is not NULL and we already have been here. We have a cycle. Let's try to
* break it. We go backwards in our path and try to find a suitable job to remove. We use the
* marker to find our way back, since smart how we are we stored our way back in there. */
for (k = from; k; k = ((k->generation == generation && k->marker != k) ? k->marker : NULL)) {
/* For logging below */
@ -449,16 +444,17 @@ static int transaction_verify_order_one(Transaction *tr, Job *j, Job *from, unsi
* the graph over 'before' edges in the actual job execution order. We traverse over both unit
* ordering dependencies and we test with job_compare() whether it is the 'before' edge in the job
* execution ordering. */
for (d = 0; d < ELEMENTSOF(directions); d++) {
HASHMAP_FOREACH_KEY(v, u, j->unit->dependencies[directions[d]]) {
for (size_t d = 0; d < ELEMENTSOF(directions); d++) {
Unit *u;
UNIT_FOREACH_DEPENDENCY(u, j->unit, directions[d]) {
Job *o;
/* Is there a job for this unit? */
o = hashmap_get(tr->jobs, u);
if (!o) {
/* Ok, there is no job for this in the
* transaction, but maybe there is already one
* running? */
/* Ok, there is no job for this in the transaction, but maybe there is
* already one running? */
o = u->job;
if (!o)
continue;
@ -879,13 +875,12 @@ static void transaction_unlink_job(Transaction *tr, Job *j, bool delete_dependen
void transaction_add_propagate_reload_jobs(Transaction *tr, Unit *unit, Job *by, bool ignore_order, sd_bus_error *e) {
JobType nt;
Unit *dep;
void *v;
int r;
assert(tr);
assert(unit);
HASHMAP_FOREACH_KEY(v, dep, unit->dependencies[UNIT_PROPAGATES_RELOAD_TO]) {
UNIT_FOREACH_DEPENDENCY(dep, unit, UNIT_ATOM_PROPAGATES_RELOAD_TO) {
nt = job_type_collapse(JOB_TRY_RELOAD, dep);
if (nt == JOB_NOP)
continue;
@ -914,7 +909,6 @@ int transaction_add_job_and_dependencies(
bool is_new;
Unit *dep;
Job *ret;
void *v;
int r;
assert(tr);
@ -1006,7 +1000,7 @@ int transaction_add_job_and_dependencies(
/* Finally, recursively add in all dependencies. */
if (IN_SET(type, JOB_START, JOB_RESTART)) {
HASHMAP_FOREACH_KEY(v, dep, ret->unit->dependencies[UNIT_REQUIRES]) {
UNIT_FOREACH_DEPENDENCY(dep, ret->unit, UNIT_ATOM_PULL_IN_START) {
r = transaction_add_job_and_dependencies(tr, JOB_START, dep, ret, true, false, false, ignore_order, e);
if (r < 0) {
if (r != -EBADR) /* job type not applicable */
@ -1016,17 +1010,7 @@ int transaction_add_job_and_dependencies(
}
}
HASHMAP_FOREACH_KEY(v, dep, ret->unit->dependencies[UNIT_BINDS_TO]) {
r = transaction_add_job_and_dependencies(tr, JOB_START, dep, ret, true, false, false, ignore_order, e);
if (r < 0) {
if (r != -EBADR) /* job type not applicable */
goto fail;
sd_bus_error_free(e);
}
}
HASHMAP_FOREACH_KEY(v, dep, ret->unit->dependencies[UNIT_WANTS]) {
UNIT_FOREACH_DEPENDENCY(dep, ret->unit, UNIT_ATOM_PULL_IN_START_IGNORED) {
r = transaction_add_job_and_dependencies(tr, JOB_START, dep, ret, false, false, false, ignore_order, e);
if (r < 0) {
/* unit masked, job type not applicable and unit not found are not considered as errors. */
@ -1038,7 +1022,7 @@ int transaction_add_job_and_dependencies(
}
}
HASHMAP_FOREACH_KEY(v, dep, ret->unit->dependencies[UNIT_REQUISITE]) {
UNIT_FOREACH_DEPENDENCY(dep, ret->unit, UNIT_ATOM_PULL_IN_VERIFY) {
r = transaction_add_job_and_dependencies(tr, JOB_VERIFY_ACTIVE, dep, ret, true, false, false, ignore_order, e);
if (r < 0) {
if (r != -EBADR) /* job type not applicable */
@ -1048,7 +1032,7 @@ int transaction_add_job_and_dependencies(
}
}
HASHMAP_FOREACH_KEY(v, dep, ret->unit->dependencies[UNIT_CONFLICTS]) {
UNIT_FOREACH_DEPENDENCY(dep, ret->unit, UNIT_ATOM_PULL_IN_STOP) {
r = transaction_add_job_and_dependencies(tr, JOB_STOP, dep, ret, true, true, false, ignore_order, e);
if (r < 0) {
if (r != -EBADR) /* job type not applicable */
@ -1058,7 +1042,7 @@ int transaction_add_job_and_dependencies(
}
}
HASHMAP_FOREACH_KEY(v, dep, ret->unit->dependencies[UNIT_CONFLICTED_BY]) {
UNIT_FOREACH_DEPENDENCY(dep, ret->unit, UNIT_ATOM_PULL_IN_STOP_IGNORED) {
r = transaction_add_job_and_dependencies(tr, JOB_STOP, dep, ret, false, false, false, ignore_order, e);
if (r < 0) {
log_unit_warning(dep,
@ -1067,41 +1051,37 @@ int transaction_add_job_and_dependencies(
sd_bus_error_free(e);
}
}
}
if (IN_SET(type, JOB_STOP, JOB_RESTART)) {
static const UnitDependency propagate_deps[] = {
UNIT_REQUIRED_BY,
UNIT_REQUISITE_OF,
UNIT_BOUND_BY,
UNIT_CONSISTS_OF,
};
UnitDependencyAtom atom;
JobType ptype;
unsigned j;
/* We propagate STOP as STOP, but RESTART only
* as TRY_RESTART, in order not to start
/* We propagate STOP as STOP, but RESTART only as TRY_RESTART, in order not to start
* dependencies that are not around. */
ptype = type == JOB_RESTART ? JOB_TRY_RESTART : type;
if (type == JOB_RESTART) {
atom = UNIT_ATOM_PROPAGATE_RESTART;
ptype = JOB_TRY_RESTART;
} else {
ptype = JOB_STOP;
atom = UNIT_ATOM_PROPAGATE_STOP;
}
for (j = 0; j < ELEMENTSOF(propagate_deps); j++)
HASHMAP_FOREACH_KEY(v, dep, ret->unit->dependencies[propagate_deps[j]]) {
JobType nt;
UNIT_FOREACH_DEPENDENCY(dep, ret->unit, atom) {
JobType nt;
nt = job_type_collapse(ptype, dep);
if (nt == JOB_NOP)
continue;
nt = job_type_collapse(ptype, dep);
if (nt == JOB_NOP)
continue;
r = transaction_add_job_and_dependencies(tr, nt, dep, ret, true, false, false, ignore_order, e);
if (r < 0) {
if (r != -EBADR) /* job type not applicable */
goto fail;
r = transaction_add_job_and_dependencies(tr, nt, dep, ret, true, false, false, ignore_order, e);
if (r < 0) {
if (r != -EBADR) /* job type not applicable */
goto fail;
sd_bus_error_free(e);
}
sd_bus_error_free(e);
}
}
}
if (type == JOB_RELOAD)
@ -1150,14 +1130,14 @@ int transaction_add_isolate_jobs(Transaction *tr, Manager *m) {
}
int transaction_add_triggering_jobs(Transaction *tr, Unit *u) {
void *v;
Unit *trigger;
int r;
assert(tr);
assert(u);
HASHMAP_FOREACH_KEY(v, trigger, u->dependencies[UNIT_TRIGGERED_BY]) {
UNIT_FOREACH_DEPENDENCY(trigger, u, UNIT_ATOM_TRIGGERED_BY) {
/* No need to stop inactive jobs */
if (UNIT_IS_INACTIVE_OR_FAILED(unit_active_state(trigger)) && !trigger->job)
continue;


@ -0,0 +1,240 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
#include "unit-dependency-atom.h"
static const UnitDependencyAtom atom_map[_UNIT_DEPENDENCY_MAX] = {
/* A table that maps high-level dependency types to low-level dependency "atoms". The latter actually
* describe specific facets of dependency behaviour. The former combine them into one user-facing
* concept. Atoms are a bit mask, though a bunch of dependency types have only a single bit set.
*
* Typically when the user configures a dependency they go via dependency type, but when we act on
* them we go by atom.
*
* NB: when you add a new dependency type here, make sure to also add one to the (best-effort)
* reverse table in unit_dependency_from_unique_atom() further down. */
[UNIT_REQUIRES] = UNIT_ATOM_PULL_IN_START |
UNIT_ATOM_RETROACTIVE_START_REPLACE |
UNIT_ATOM_ADD_STOP_WHEN_UNNEEDED_QUEUE |
UNIT_ATOM_ADD_DEFAULT_TARGET_DEPENDENCY_QUEUE,
[UNIT_REQUISITE] = UNIT_ATOM_PULL_IN_VERIFY |
UNIT_ATOM_ADD_STOP_WHEN_UNNEEDED_QUEUE |
UNIT_ATOM_ADD_DEFAULT_TARGET_DEPENDENCY_QUEUE,
[UNIT_WANTS] = UNIT_ATOM_PULL_IN_START_IGNORED |
UNIT_ATOM_RETROACTIVE_START_FAIL |
UNIT_ATOM_ADD_STOP_WHEN_UNNEEDED_QUEUE |
UNIT_ATOM_ADD_DEFAULT_TARGET_DEPENDENCY_QUEUE,
[UNIT_BINDS_TO] = UNIT_ATOM_PULL_IN_START |
UNIT_ATOM_RETROACTIVE_START_REPLACE |
UNIT_ATOM_CANNOT_BE_ACTIVE_WITHOUT |
UNIT_ATOM_ADD_STOP_WHEN_UNNEEDED_QUEUE |
UNIT_ATOM_ADD_DEFAULT_TARGET_DEPENDENCY_QUEUE,
[UNIT_PART_OF] = UNIT_ATOM_ADD_DEFAULT_TARGET_DEPENDENCY_QUEUE,
[UNIT_UPHOLDS] = UNIT_ATOM_PULL_IN_START_IGNORED |
UNIT_ATOM_RETROACTIVE_START_REPLACE |
UNIT_ATOM_ADD_START_WHEN_UPHELD_QUEUE |
UNIT_ATOM_ADD_STOP_WHEN_UNNEEDED_QUEUE |
UNIT_ATOM_ADD_DEFAULT_TARGET_DEPENDENCY_QUEUE,
[UNIT_REQUIRED_BY] = UNIT_ATOM_PROPAGATE_STOP |
UNIT_ATOM_PROPAGATE_RESTART |
UNIT_ATOM_PROPAGATE_START_FAILURE |
UNIT_ATOM_PINS_STOP_WHEN_UNNEEDED |
UNIT_ATOM_DEFAULT_TARGET_DEPENDENCIES,
[UNIT_REQUISITE_OF] = UNIT_ATOM_PROPAGATE_STOP |
UNIT_ATOM_PROPAGATE_RESTART |
UNIT_ATOM_PROPAGATE_START_FAILURE |
UNIT_ATOM_PROPAGATE_INACTIVE_START_AS_FAILURE |
UNIT_ATOM_PINS_STOP_WHEN_UNNEEDED |
UNIT_ATOM_DEFAULT_TARGET_DEPENDENCIES,
[UNIT_WANTED_BY] = UNIT_ATOM_DEFAULT_TARGET_DEPENDENCIES |
UNIT_ATOM_PINS_STOP_WHEN_UNNEEDED,
[UNIT_BOUND_BY] = UNIT_ATOM_RETROACTIVE_STOP_ON_STOP |
UNIT_ATOM_PROPAGATE_STOP |
UNIT_ATOM_PROPAGATE_RESTART |
UNIT_ATOM_PROPAGATE_START_FAILURE |
UNIT_ATOM_PINS_STOP_WHEN_UNNEEDED |
UNIT_ATOM_ADD_CANNOT_BE_ACTIVE_WITHOUT_QUEUE |
UNIT_ATOM_DEFAULT_TARGET_DEPENDENCIES,
[UNIT_UPHELD_BY] = UNIT_ATOM_START_STEADILY |
UNIT_ATOM_DEFAULT_TARGET_DEPENDENCIES |
UNIT_ATOM_PINS_STOP_WHEN_UNNEEDED,
[UNIT_CONSISTS_OF] = UNIT_ATOM_PROPAGATE_STOP |
UNIT_ATOM_PROPAGATE_RESTART,
[UNIT_CONFLICTS] = UNIT_ATOM_PULL_IN_STOP |
UNIT_ATOM_RETROACTIVE_STOP_ON_START,
[UNIT_CONFLICTED_BY] = UNIT_ATOM_PULL_IN_STOP_IGNORED |
UNIT_ATOM_RETROACTIVE_STOP_ON_START |
UNIT_ATOM_PROPAGATE_STOP_FAILURE,
[UNIT_PROPAGATES_STOP_TO] = UNIT_ATOM_RETROACTIVE_STOP_ON_STOP |
UNIT_ATOM_PROPAGATE_STOP,
/* These are simple dependency types: they consist of a single atom only */
[UNIT_BEFORE] = UNIT_ATOM_BEFORE,
[UNIT_AFTER] = UNIT_ATOM_AFTER,
[UNIT_ON_SUCCESS] = UNIT_ATOM_ON_SUCCESS,
[UNIT_ON_FAILURE] = UNIT_ATOM_ON_FAILURE,
[UNIT_TRIGGERS] = UNIT_ATOM_TRIGGERS,
[UNIT_TRIGGERED_BY] = UNIT_ATOM_TRIGGERED_BY,
[UNIT_PROPAGATES_RELOAD_TO] = UNIT_ATOM_PROPAGATES_RELOAD_TO,
[UNIT_JOINS_NAMESPACE_OF] = UNIT_ATOM_JOINS_NAMESPACE_OF,
[UNIT_REFERENCES] = UNIT_ATOM_REFERENCES,
[UNIT_REFERENCED_BY] = UNIT_ATOM_REFERENCED_BY,
[UNIT_IN_SLICE] = UNIT_ATOM_IN_SLICE,
[UNIT_SLICE_OF] = UNIT_ATOM_SLICE_OF,
/* These are dependency types without effect on our state engine. We maintain them only to make
* things discoverable/debuggable as they are the inverse dependencies to some of the above. As they
* have no effect of their own, they all map to no atoms at all, i.e. the value 0. */
[UNIT_RELOAD_PROPAGATED_FROM] = 0,
[UNIT_ON_SUCCESS_OF] = 0,
[UNIT_ON_FAILURE_OF] = 0,
[UNIT_STOP_PROPAGATED_FROM] = 0,
};
UnitDependencyAtom unit_dependency_to_atom(UnitDependency d) {
if (d < 0)
return _UNIT_DEPENDENCY_ATOM_INVALID;
assert(d < _UNIT_DEPENDENCY_MAX);
return atom_map[d];
}
UnitDependency unit_dependency_from_unique_atom(UnitDependencyAtom atom) {
/* This is a "best-effort" function that maps the specified 'atom' mask to a dependency type that
* is equal to or has a superset of bits set, if that's uniquely possible. The idea is that this
* function is used when iterating through deps that have a specific atom: if there's exactly one
* dependency type with the specific atom we don't need to iterate through all deps a unit has, but
* can pinpoint things directly.
*
* This function will return _UNIT_DEPENDENCY_INVALID in case the specified value is not known or not
* uniquely defined, i.e. there are multiple dependencies with the atom or the combination set. */
switch ((int64_t) atom) {
/* Note that we can't list UNIT_REQUIRES here since it's a true subset of UNIT_BINDS_TO, and
* hence its atom bits not uniquely mappable. */
case UNIT_ATOM_PULL_IN_VERIFY |
UNIT_ATOM_ADD_STOP_WHEN_UNNEEDED_QUEUE |
UNIT_ATOM_ADD_DEFAULT_TARGET_DEPENDENCY_QUEUE:
case UNIT_ATOM_PULL_IN_VERIFY: /* a single dep type uses this atom */
return UNIT_REQUISITE;
case UNIT_ATOM_PULL_IN_START_IGNORED |
UNIT_ATOM_RETROACTIVE_START_FAIL |
UNIT_ATOM_ADD_STOP_WHEN_UNNEEDED_QUEUE |
UNIT_ATOM_ADD_DEFAULT_TARGET_DEPENDENCY_QUEUE:
case UNIT_ATOM_RETROACTIVE_START_FAIL:
return UNIT_WANTS;
case UNIT_ATOM_PULL_IN_START |
UNIT_ATOM_RETROACTIVE_START_REPLACE |
UNIT_ATOM_CANNOT_BE_ACTIVE_WITHOUT |
UNIT_ATOM_ADD_STOP_WHEN_UNNEEDED_QUEUE |
UNIT_ATOM_ADD_DEFAULT_TARGET_DEPENDENCY_QUEUE:
case UNIT_ATOM_CANNOT_BE_ACTIVE_WITHOUT:
return UNIT_BINDS_TO;
case UNIT_ATOM_PULL_IN_START_IGNORED |
UNIT_ATOM_RETROACTIVE_START_REPLACE |
UNIT_ATOM_ADD_START_WHEN_UPHELD_QUEUE |
UNIT_ATOM_ADD_STOP_WHEN_UNNEEDED_QUEUE |
UNIT_ATOM_ADD_DEFAULT_TARGET_DEPENDENCY_QUEUE:
case UNIT_ATOM_ADD_START_WHEN_UPHELD_QUEUE:
return UNIT_UPHOLDS;
case UNIT_ATOM_PROPAGATE_STOP |
UNIT_ATOM_PROPAGATE_RESTART |
UNIT_ATOM_PROPAGATE_START_FAILURE |
UNIT_ATOM_PROPAGATE_INACTIVE_START_AS_FAILURE |
UNIT_ATOM_PINS_STOP_WHEN_UNNEEDED |
UNIT_ATOM_DEFAULT_TARGET_DEPENDENCIES:
case UNIT_ATOM_PROPAGATE_INACTIVE_START_AS_FAILURE:
return UNIT_REQUISITE_OF;
case UNIT_ATOM_RETROACTIVE_STOP_ON_STOP |
UNIT_ATOM_PROPAGATE_STOP |
UNIT_ATOM_PROPAGATE_RESTART |
UNIT_ATOM_PROPAGATE_START_FAILURE |
UNIT_ATOM_PINS_STOP_WHEN_UNNEEDED |
UNIT_ATOM_ADD_CANNOT_BE_ACTIVE_WITHOUT_QUEUE |
UNIT_ATOM_DEFAULT_TARGET_DEPENDENCIES:
case UNIT_ATOM_ADD_CANNOT_BE_ACTIVE_WITHOUT_QUEUE:
return UNIT_BOUND_BY;
case UNIT_ATOM_START_STEADILY |
UNIT_ATOM_DEFAULT_TARGET_DEPENDENCIES |
UNIT_ATOM_PINS_STOP_WHEN_UNNEEDED:
case UNIT_ATOM_START_STEADILY:
return UNIT_UPHELD_BY;
case UNIT_ATOM_PULL_IN_STOP |
UNIT_ATOM_RETROACTIVE_STOP_ON_START:
case UNIT_ATOM_PULL_IN_STOP:
return UNIT_CONFLICTS;
case UNIT_ATOM_PULL_IN_STOP_IGNORED |
UNIT_ATOM_RETROACTIVE_STOP_ON_START |
UNIT_ATOM_PROPAGATE_STOP_FAILURE:
case UNIT_ATOM_PULL_IN_STOP_IGNORED:
case UNIT_ATOM_PROPAGATE_STOP_FAILURE:
return UNIT_CONFLICTED_BY;
/* And now, the simple ones */
case UNIT_ATOM_BEFORE:
return UNIT_BEFORE;
case UNIT_ATOM_AFTER:
return UNIT_AFTER;
case UNIT_ATOM_ON_SUCCESS:
return UNIT_ON_SUCCESS;
case UNIT_ATOM_ON_FAILURE:
return UNIT_ON_FAILURE;
case UNIT_ATOM_TRIGGERS:
return UNIT_TRIGGERS;
case UNIT_ATOM_TRIGGERED_BY:
return UNIT_TRIGGERED_BY;
case UNIT_ATOM_PROPAGATES_RELOAD_TO:
return UNIT_PROPAGATES_RELOAD_TO;
case UNIT_ATOM_JOINS_NAMESPACE_OF:
return UNIT_JOINS_NAMESPACE_OF;
case UNIT_ATOM_REFERENCES:
return UNIT_REFERENCES;
case UNIT_ATOM_REFERENCED_BY:
return UNIT_REFERENCED_BY;
case UNIT_ATOM_IN_SLICE:
return UNIT_IN_SLICE;
case UNIT_ATOM_SLICE_OF:
return UNIT_SLICE_OF;
default:
return _UNIT_DEPENDENCY_INVALID;
}
}


@ -0,0 +1,87 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
#pragma once
#include <errno.h>
#include "unit-def.h"
/* Flags that identify the various "atomic" behaviours a specific dependency type implies. Each dependency is
* a combination of one or more of these flags that define what they actually entail. */
typedef enum UnitDependencyAtom {
/* This unit pulls in the other unit as JOB_START job into the transaction, and if that doesn't work
* the transaction fails. */
UNIT_ATOM_PULL_IN_START = UINT64_C(1) << 0,
/* Similar, but if it doesn't work, ignore. */
UNIT_ATOM_PULL_IN_START_IGNORED = UINT64_C(1) << 1,
/* Pull in a JOB_VERIFY job into the transaction, i.e. pull in JOB_VERIFY rather than
* JOB_START: check that the unit is started but don't pull it in. */
UNIT_ATOM_PULL_IN_VERIFY = UINT64_C(1) << 2,
/* Pull in a JOB_STOP job for the other job into transactions, and fail if that doesn't work. */
UNIT_ATOM_PULL_IN_STOP = UINT64_C(1) << 3,
/* Same, but don't fail, ignore it. */
UNIT_ATOM_PULL_IN_STOP_IGNORED = UINT64_C(1) << 4,
/* If our unit enters inactive state, add the other unit to the StopWhenUnneeded= queue */
UNIT_ATOM_ADD_STOP_WHEN_UNNEEDED_QUEUE = UINT64_C(1) << 5,
/* Pin the other unit, i.e. ensure StopWhenUnneeded= won't trigger for the other unit as long as we
* are not in inactive state */
UNIT_ATOM_PINS_STOP_WHEN_UNNEEDED = UINT64_C(1) << 6,
/* Stop our unit if the other unit happens to be inactive */
UNIT_ATOM_CANNOT_BE_ACTIVE_WITHOUT = UINT64_C(1) << 7,
/* If our unit enters inactive state, add the other unit to the BoundBy= queue */
UNIT_ATOM_ADD_CANNOT_BE_ACTIVE_WITHOUT_QUEUE = UINT64_C(1) << 8,
/* Start this unit whenever we find it inactive and the other unit active */
UNIT_ATOM_START_STEADILY = UINT64_C(1) << 9,
/* Whenever our unit becomes active, add other unit to start_when_upheld_queue */
UNIT_ATOM_ADD_START_WHEN_UPHELD_QUEUE = UINT64_C(1) << 10,
/* If our unit unexpectedly becomes active, retroactively start the other unit too, in "replace" job
* mode */
UNIT_ATOM_RETROACTIVE_START_REPLACE = UINT64_C(1) << 11,
/* Similar, but in "fail" job mode */
UNIT_ATOM_RETROACTIVE_START_FAIL = UINT64_C(1) << 12,
/* If our unit unexpectedly becomes active, retroactively stop the other unit too */
UNIT_ATOM_RETROACTIVE_STOP_ON_START = UINT64_C(1) << 13,
/* If our unit unexpectedly becomes inactive, retroactively stop the other unit too */
UNIT_ATOM_RETROACTIVE_STOP_ON_STOP = UINT64_C(1) << 14,
/* If a start job for this unit fails, propagate the failure to start job of other unit too */
UNIT_ATOM_PROPAGATE_START_FAILURE = UINT64_C(1) << 15,
/* If a stop job for this unit fails, propagate the failure to any stop job of the other unit too */
UNIT_ATOM_PROPAGATE_STOP_FAILURE = UINT64_C(1) << 16,
/* If our start job succeeded but the unit is inactive then (think: oneshot units), propagate this as
* failure to the other unit. */
UNIT_ATOM_PROPAGATE_INACTIVE_START_AS_FAILURE = UINT64_C(1) << 17,
/* When putting together a transaction, propagate JOB_STOP from our unit to the other. */
UNIT_ATOM_PROPAGATE_STOP = UINT64_C(1) << 18,
/* When putting together a transaction, propagate JOB_RESTART from our unit to the other. */
UNIT_ATOM_PROPAGATE_RESTART = UINT64_C(1) << 19,
/* Add the other unit to the default target dependency queue */
UNIT_ATOM_ADD_DEFAULT_TARGET_DEPENDENCY_QUEUE = UINT64_C(1) << 20,
/* Recheck default target deps on other units (which are target units) */
UNIT_ATOM_DEFAULT_TARGET_DEPENDENCIES = UINT64_C(1) << 21,
/* The remaining atoms map 1:1 to the equally named high-level deps */
UNIT_ATOM_BEFORE = UINT64_C(1) << 22,
UNIT_ATOM_AFTER = UINT64_C(1) << 23,
UNIT_ATOM_ON_SUCCESS = UINT64_C(1) << 24,
UNIT_ATOM_ON_FAILURE = UINT64_C(1) << 25,
UNIT_ATOM_TRIGGERS = UINT64_C(1) << 26,
UNIT_ATOM_TRIGGERED_BY = UINT64_C(1) << 27,
UNIT_ATOM_PROPAGATES_RELOAD_TO = UINT64_C(1) << 28,
UNIT_ATOM_JOINS_NAMESPACE_OF = UINT64_C(1) << 29,
UNIT_ATOM_REFERENCES = UINT64_C(1) << 30,
UNIT_ATOM_REFERENCED_BY = UINT64_C(1) << 31,
UNIT_ATOM_IN_SLICE = UINT64_C(1) << 32,
UNIT_ATOM_SLICE_OF = UINT64_C(1) << 33,
_UNIT_DEPENDENCY_ATOM_MAX = (UINT64_C(1) << 34) - 1,
_UNIT_DEPENDENCY_ATOM_INVALID = -EINVAL,
} UnitDependencyAtom;
UnitDependencyAtom unit_dependency_to_atom(UnitDependency d);
UnitDependency unit_dependency_from_unique_atom(UnitDependencyAtom atom);


@ -132,18 +132,15 @@ static int specifier_cgroup_root(char specifier, const void *data, const void *u
}
static int specifier_cgroup_slice(char specifier, const void *data, const void *userdata, char **ret) {
const Unit *u = userdata;
const Unit *u = userdata, *slice;
char *n;
assert(u);
bad_specifier(u, specifier);
if (UNIT_ISSET(u->slice)) {
const Unit *slice;
slice = UNIT_DEREF(u->slice);
slice = UNIT_GET_SLICE(u);
if (slice) {
if (slice->cgroup_path)
n = strdup(slice->cgroup_path);
else


@ -604,10 +604,7 @@ void unit_dump(Unit *u, FILE *f, const char *prefix) {
"%s\tNeed Daemon Reload: %s\n"
"%s\tTransient: %s\n"
"%s\tPerpetual: %s\n"
"%s\tGarbage Collection Mode: %s\n"
"%s\tSlice: %s\n"
"%s\tCGroup: %s\n"
"%s\tCGroup realized: %s\n",
"%s\tGarbage Collection Mode: %s\n",
prefix, unit_description(u),
prefix, strna(u->instance),
prefix, unit_load_state_to_string(u->load_state),
@ -621,10 +618,7 @@ void unit_dump(Unit *u, FILE *f, const char *prefix) {
prefix, yes_no(unit_need_daemon_reload(u)),
prefix, yes_no(u->transient),
prefix, yes_no(u->perpetual),
prefix, collect_mode_to_string(u->collect_mode),
prefix, strna(unit_slice_name(u)),
prefix, strna(u->cgroup_path),
prefix, yes_no(u->cgroup_realized));
prefix, collect_mode_to_string(u->collect_mode));
if (u->markers != 0) {
fprintf(f, "%s\tMarkers:", prefix);
@ -635,37 +629,47 @@ void unit_dump(Unit *u, FILE *f, const char *prefix) {
fputs("\n", f);
}
if (u->cgroup_realized_mask != 0) {
_cleanup_free_ char *s = NULL;
(void) cg_mask_to_string(u->cgroup_realized_mask, &s);
fprintf(f, "%s\tCGroup realized mask: %s\n", prefix, strnull(s));
}
if (UNIT_HAS_CGROUP_CONTEXT(u)) {
fprintf(f,
"%s\tSlice: %s\n"
"%s\tCGroup: %s\n"
"%s\tCGroup realized: %s\n",
prefix, strna(unit_slice_name(u)),
prefix, strna(u->cgroup_path),
prefix, yes_no(u->cgroup_realized));
if (u->cgroup_enabled_mask != 0) {
_cleanup_free_ char *s = NULL;
(void) cg_mask_to_string(u->cgroup_enabled_mask, &s);
fprintf(f, "%s\tCGroup enabled mask: %s\n", prefix, strnull(s));
}
if (u->cgroup_realized_mask != 0) {
_cleanup_free_ char *s = NULL;
(void) cg_mask_to_string(u->cgroup_realized_mask, &s);
fprintf(f, "%s\tCGroup realized mask: %s\n", prefix, strnull(s));
}
m = unit_get_own_mask(u);
if (m != 0) {
_cleanup_free_ char *s = NULL;
(void) cg_mask_to_string(m, &s);
fprintf(f, "%s\tCGroup own mask: %s\n", prefix, strnull(s));
}
if (u->cgroup_enabled_mask != 0) {
_cleanup_free_ char *s = NULL;
(void) cg_mask_to_string(u->cgroup_enabled_mask, &s);
fprintf(f, "%s\tCGroup enabled mask: %s\n", prefix, strnull(s));
}
m = unit_get_members_mask(u);
if (m != 0) {
_cleanup_free_ char *s = NULL;
(void) cg_mask_to_string(m, &s);
fprintf(f, "%s\tCGroup members mask: %s\n", prefix, strnull(s));
}
m = unit_get_own_mask(u);
if (m != 0) {
_cleanup_free_ char *s = NULL;
(void) cg_mask_to_string(m, &s);
fprintf(f, "%s\tCGroup own mask: %s\n", prefix, strnull(s));
}
m = unit_get_delegate_mask(u);
if (m != 0) {
_cleanup_free_ char *s = NULL;
(void) cg_mask_to_string(m, &s);
fprintf(f, "%s\tCGroup delegate mask: %s\n", prefix, strnull(s));
m = unit_get_members_mask(u);
if (m != 0) {
_cleanup_free_ char *s = NULL;
(void) cg_mask_to_string(m, &s);
fprintf(f, "%s\tCGroup members mask: %s\n", prefix, strnull(s));
}
m = unit_get_delegate_mask(u);
if (m != 0) {
_cleanup_free_ char *s = NULL;
(void) cg_mask_to_string(m, &s);
fprintf(f, "%s\tCGroup delegate mask: %s\n", prefix, strnull(s));
}
}
if (!sd_id128_is_null(u->invocation_id))
@ -735,7 +739,7 @@ void unit_dump(Unit *u, FILE *f, const char *prefix) {
UnitDependencyInfo di;
Unit *other;
HASHMAP_FOREACH_KEY(di.data, other, u->dependencies[d]) {
HASHMAP_FOREACH_KEY(di.data, other, unit_get_dependencies(u, d)) {
bool space = false;
fprintf(f, "%s\t%s: %s (", prefix, unit_dependency_to_string(d), other->id);
@ -770,12 +774,14 @@ void unit_dump(Unit *u, FILE *f, const char *prefix) {
"%s\tRefuseManualStart: %s\n"
"%s\tRefuseManualStop: %s\n"
"%s\tDefaultDependencies: %s\n"
"%s\tOnSuccessJobMode: %s\n"
"%s\tOnFailureJobMode: %s\n"
"%s\tIgnoreOnIsolate: %s\n",
prefix, yes_no(u->stop_when_unneeded),
prefix, yes_no(u->refuse_manual_start),
prefix, yes_no(u->refuse_manual_stop),
prefix, yes_no(u->default_dependencies),
prefix, job_mode_to_string(u->on_success_job_mode),
prefix, job_mode_to_string(u->on_failure_job_mode),
prefix, yes_no(u->ignore_on_isolate));

File diff suppressed because it is too large


@ -102,6 +102,16 @@ typedef union UnitDependencyInfo {
} _packed_;
} UnitDependencyInfo;
/* Newer LLVM versions don't like implicit casts from large pointer types to smaller enums, hence let's add
* explicit type-safe helpers for that. */
static inline UnitDependency UNIT_DEPENDENCY_FROM_PTR(const void *p) {
return PTR_TO_INT(p);
}
static inline void* UNIT_DEPENDENCY_TO_PTR(UnitDependency d) {
return INT_TO_PTR(d);
}
#include "job.h"
struct UnitRef {
@ -125,11 +135,13 @@ typedef struct Unit {
Set *aliases; /* All the other names. */
/* For each dependency type we maintain a Hashmap whose key is the Unit* object, and the value encodes why the
* dependency exists, using the UnitDependencyInfo type */
Hashmap *dependencies[_UNIT_DEPENDENCY_MAX];
/* For each dependency type we can look up another Hashmap with this, whose key is a Unit* object,
* and whose value encodes why the dependency exists, using the UnitDependencyInfo type, i.e. a
* Hashmap(UnitDependency → Hashmap(Unit* → UnitDependencyInfo)) */
Hashmap *dependencies;
/* Similar, for RequiresMountsFor= path dependencies. The key is the path, the value the UnitDependencyInfo type */
/* Similar, for RequiresMountsFor= path dependencies. The key is the path, the value the
* UnitDependencyInfo type */
Hashmap *requires_mounts_for;
char *description;
@ -190,8 +202,6 @@ typedef struct Unit {
dual_timestamp active_exit_timestamp;
dual_timestamp inactive_enter_timestamp;
UnitRef slice;
/* Per type list */
LIST_FIELDS(Unit, units_by_type);
@ -219,9 +229,15 @@ typedef struct Unit {
/* Target dependencies queue */
LIST_FIELDS(Unit, target_deps_queue);
/* Queue of units with StopWhenUnneeded set that shell be checked for clean-up. */
/* Queue of units with StopWhenUnneeded= set that shall be checked for clean-up. */
LIST_FIELDS(Unit, stop_when_unneeded_queue);
/* Queue of units that have an Uphold= dependency from some other unit, and should be checked for starting */
LIST_FIELDS(Unit, start_when_upheld_queue);
/* Queue of units that have a BindsTo= dependency on some other unit, and should possibly be shut down */
LIST_FIELDS(Unit, stop_when_bound_queue);
/* PIDs we keep an eye on. Note that a unit might have many
* more, but these are the ones we care enough about to
* process SIGCHLD for */
@ -250,8 +266,8 @@ typedef struct Unit {
int success_action_exit_status, failure_action_exit_status;
char *reboot_arg;
/* Make sure we never enter endless loops with the check unneeded logic, or the BindsTo= logic */
RateLimit auto_stop_ratelimit;
/* Make sure we never enter endless loops with the StopWhenUnneeded=, BindsTo=, Uphold= logic */
RateLimit auto_start_stop_ratelimit;
/* Reference to a specific UID/GID */
uid_t ref_uid;
@ -324,7 +340,8 @@ typedef struct Unit {
* ones which might have appeared. */
sd_event_source *rewatch_pids_event_source;
/* How to start OnFailure units */
/* How to start OnSuccess=/OnFailure= units */
JobMode on_success_job_mode;
JobMode on_failure_job_mode;
/* Tweaking the GC logic */
@ -372,6 +389,8 @@ typedef struct Unit {
bool in_cgroup_oom_queue:1;
bool in_target_deps_queue:1;
bool in_stop_when_unneeded_queue:1;
bool in_start_when_upheld_queue:1;
bool in_stop_when_bound_queue:1;
bool sent_dbus_new_signal:1;
@ -683,8 +702,19 @@ static inline const UnitVTable* UNIT_VTABLE(const Unit *u) {
#define UNIT_HAS_CGROUP_CONTEXT(u) (UNIT_VTABLE(u)->cgroup_context_offset > 0)
#define UNIT_HAS_KILL_CONTEXT(u) (UNIT_VTABLE(u)->kill_context_offset > 0)
Unit* unit_has_dependency(const Unit *u, UnitDependencyAtom atom, Unit *other);
int unit_get_dependency_array(const Unit *u, UnitDependencyAtom atom, Unit ***ret_array);
static inline Hashmap* unit_get_dependencies(Unit *u, UnitDependency d) {
return hashmap_get(u->dependencies, UNIT_DEPENDENCY_TO_PTR(d));
}
static inline Unit* UNIT_TRIGGER(Unit *u) {
return hashmap_first_key(u->dependencies[UNIT_TRIGGERS]);
return unit_has_dependency(u, UNIT_ATOM_TRIGGERS, NULL);
}
static inline Unit* UNIT_GET_SLICE(const Unit *u) {
return unit_has_dependency(u, UNIT_ATOM_IN_SLICE, NULL);
}
Unit* unit_new(Manager *m, size_t size);
@ -718,6 +748,8 @@ void unit_add_to_cleanup_queue(Unit *u);
void unit_add_to_gc_queue(Unit *u);
void unit_add_to_target_deps_queue(Unit *u);
void unit_submit_to_stop_when_unneeded_queue(Unit *u);
void unit_submit_to_start_when_upheld_queue(Unit *u);
void unit_submit_to_stop_when_bound_queue(Unit *u);
int unit_merge(Unit *u, Unit *other);
int unit_merge_by_name(Unit *u, const char *other);
@ -727,7 +759,7 @@ Unit *unit_follow_merge(Unit *u) _pure_;
int unit_load_fragment_and_dropin(Unit *u, bool fragment_required);
int unit_load(Unit *unit);
int unit_set_slice(Unit *u, Unit *slice);
int unit_set_slice(Unit *u, Unit *slice, UnitDependencyMask mask);
int unit_set_default_slice(Unit *u);
const char *unit_description(Unit *u) _pure_;
@ -807,7 +839,7 @@ bool unit_will_restart(Unit *u);
int unit_add_default_target_dependency(Unit *u, Unit *target);
void unit_start_on_failure(Unit *u);
void unit_start_on_failure(Unit *u, const char *dependency_name, UnitDependencyAtom atom, JobMode job_mode);
void unit_trigger_notify(Unit *u);
UnitFileState unit_get_unit_file_state(Unit *u);
@ -847,6 +879,8 @@ bool unit_type_supported(UnitType t);
bool unit_is_pristine(Unit *u);
bool unit_is_unneeded(Unit *u);
bool unit_is_upheld_by_active(Unit *u, Unit **ret_culprit);
bool unit_is_bound_by_inactive(Unit *u, Unit **ret_culprit);
pid_t unit_control_pid(Unit *u);
pid_t unit_main_pid(Unit *u);
@ -990,3 +1024,54 @@ int unit_thaw_vtable_common(Unit *u);
const char* collect_mode_to_string(CollectMode m) _const_;
CollectMode collect_mode_from_string(const char *s) _pure_;
typedef struct UnitForEachDependencyData {
/* Stores state for the FOREACH macro below for iterating through all deps that have any of the
* specified dependency atom bits set */
UnitDependencyAtom match_atom;
Hashmap *by_type, *by_unit;
void *current_type;
Iterator by_type_iterator, by_unit_iterator;
Unit **current_unit;
} UnitForEachDependencyData;
/* Iterates through all dependencies that have a specific atom in the dependency type set. This tries to be
 * smart: if the atom is unique, we'll directly go to the right entry. Otherwise we'll iterate through the
 * per-dependency type hashmap and match all deps that have the right atom set. */
#define _UNIT_FOREACH_DEPENDENCY(other, u, ma, data) \
for (UnitForEachDependencyData data = { \
.match_atom = (ma), \
.by_type = (u)->dependencies, \
.by_type_iterator = ITERATOR_FIRST, \
.current_unit = &(other), \
}; \
({ \
UnitDependency _dt = _UNIT_DEPENDENCY_INVALID; \
bool _found; \
\
if (data.by_type && ITERATOR_IS_FIRST(data.by_type_iterator)) { \
_dt = unit_dependency_from_unique_atom(data.match_atom); \
if (_dt >= 0) { \
data.by_unit = hashmap_get(data.by_type, UNIT_DEPENDENCY_TO_PTR(_dt)); \
data.current_type = UNIT_DEPENDENCY_TO_PTR(_dt); \
data.by_type = NULL; \
_found = !!data.by_unit; \
} \
} \
if (_dt < 0) \
_found = hashmap_iterate(data.by_type, \
&data.by_type_iterator, \
(void**)&(data.by_unit), \
(const void**) &(data.current_type)); \
_found; \
}); ) \
if ((unit_dependency_to_atom(UNIT_DEPENDENCY_FROM_PTR(data.current_type)) & data.match_atom) != 0) \
for (data.by_unit_iterator = ITERATOR_FIRST; \
hashmap_iterate(data.by_unit, \
&data.by_unit_iterator, \
NULL, \
(const void**) data.current_unit); )
/* Note: this matches deps that have *any* of the atoms specified in match_atom set */
#define UNIT_FOREACH_DEPENDENCY(other, u, match_atom) \
_UNIT_FOREACH_DEPENDENCY(other, u, match_atom, UNIQ_T(data, UNIQ))


@ -97,7 +97,7 @@ static int help(void) {
" Whether to require user verification to unlock the volume\n"
" --tpm2-device=PATH\n"
" Enroll a TPM2 device\n"
" --tpm2-pcrs=PCR1,PCR2,PCR3,…\n"
" --tpm2-pcrs=PCR1+PCR2+PCR3+…\n"
" Specify TPM2 PCRs to seal against\n"
" --wipe-slot=SLOT1,SLOT2,…\n"
" Wipe specified slots\n"


@ -1086,9 +1086,15 @@ static int terminate_user(int argc, char *argv[], void *userdata) {
for (int i = 1; i < argc; i++) {
uid_t uid;
r = get_user_creds((const char**) (argv+i), &uid, NULL, NULL, NULL, 0);
if (r < 0)
return log_error_errno(r, "Failed to look up user %s: %m", argv[i]);
if (isempty(argv[i]))
uid = getuid();
else {
const char *u = argv[i];
r = get_user_creds(&u, &uid, NULL, NULL, NULL, 0);
if (r < 0)
return log_error_errno(r, "Failed to look up user %s: %m", argv[i]);
}
r = bus_call_method(bus, bus_login_mgr, "TerminateUser", &error, NULL, "u", (uint32_t) uid);
if (r < 0)
@ -1114,9 +1120,15 @@ static int kill_user(int argc, char *argv[], void *userdata) {
for (int i = 1; i < argc; i++) {
uid_t uid;
r = get_user_creds((const char**) (argv+i), &uid, NULL, NULL, NULL, 0);
if (r < 0)
return log_error_errno(r, "Failed to look up user %s: %m", argv[i]);
if (isempty(argv[i]))
uid = getuid();
else {
const char *u = argv[i];
r = get_user_creds(&u, &uid, NULL, NULL, NULL, 0);
if (r < 0)
return log_error_errno(r, "Failed to look up user %s: %m", argv[i]);
}
r = bus_call_method(
bus,


@ -4070,7 +4070,7 @@ static int help(void) {
" --definitions=DIR Find partition definitions in specified directory\n"
" --key-file=PATH Key to use when encrypting partitions\n"
" --tpm2-device=PATH Path to TPM2 device node to use\n"
" --tpm2-pcrs=PCR1,PCR2,\n"
" --tpm2-pcrs=PCR1+PCR2+PCR3+…\n"
" TPM2 PCR indexes to use for TPM2 enrollment\n"
" --seed=UUID 128bit seed UUID to derive all UUIDs from\n"
" --size=BYTES Grow loopback file to specified size\n"


@ -1335,7 +1335,7 @@ int dns_name_apply_idna(const char *name, char **ret) {
return -EINVAL;
#elif HAVE_LIBIDN
_cleanup_free_ char *buf = NULL;
size_t n = 0, allocated = 0;
size_t n = 0;
bool first = true;
int r, q;
@ -1357,7 +1357,7 @@ int dns_name_apply_idna(const char *name, char **ret) {
if (q > 0)
r = q;
if (!GREEDY_REALLOC(buf, allocated, n + !first + DNS_LABEL_ESCAPED_MAX))
if (!GREEDY_REALLOC(buf, n + !first + DNS_LABEL_ESCAPED_MAX))
return -ENOMEM;
r = dns_label_escape(label, r, buf + n + !first, DNS_LABEL_ESCAPED_MAX);
@ -1375,7 +1375,7 @@ int dns_name_apply_idna(const char *name, char **ret) {
if (n > DNS_HOSTNAME_MAX)
return -EINVAL;
if (!GREEDY_REALLOC(buf, allocated, n + 1))
if (!GREEDY_REALLOC(buf, n + 1))
return -ENOMEM;
buf[n] = 0;


@ -920,13 +920,23 @@ int tpm2_parse_pcrs(const char *s, uint32_t *ret) {
uint32_t mask = 0;
int r;
/* Parses a comma-separated list of PCR indexes */
assert(s);
if (isempty(s)) {
*ret = 0;
return 0;
}
/* Parses a "," or "+" separated list of PCR indexes. We support "," since this is a list after all,
* and most other tools expect comma separated PCR specifications. We also support "+" since in
* /etc/crypttab the "," is already used to separate options, hence a different separator is nice to
* avoid escaping. */
for (;;) {
_cleanup_free_ char *pcr = NULL;
unsigned n;
r = extract_first_word(&p, &pcr, ",", EXTRACT_DONT_COALESCE_SEPARATORS);
r = extract_first_word(&p, &pcr, ",+", EXTRACT_DONT_COALESCE_SEPARATORS);
if (r == 0)
break;
if (r < 0)


@ -422,6 +422,8 @@ tests += [
[['src/test/test-sleep.c']],
[['src/test/test-tpm2.c']],
[['src/test/test-replace-var.c']],
[['src/test/test-calendarspec.c']],


@ -70,13 +70,13 @@ static int test_cgroup_mask(void) {
assert_se(manager_load_startable_unit_or_warn(m, "parent-deep.slice", NULL, &parent_deep) >= 0);
assert_se(manager_load_startable_unit_or_warn(m, "nomem.slice", NULL, &nomem_parent) >= 0);
assert_se(manager_load_startable_unit_or_warn(m, "nomemleaf.service", NULL, &nomem_leaf) >= 0);
assert_se(UNIT_DEREF(son->slice) == parent);
assert_se(UNIT_DEREF(daughter->slice) == parent);
assert_se(UNIT_DEREF(parent_deep->slice) == parent);
assert_se(UNIT_DEREF(grandchild->slice) == parent_deep);
assert_se(UNIT_DEREF(nomem_leaf->slice) == nomem_parent);
root = UNIT_DEREF(parent->slice);
assert_se(UNIT_DEREF(nomem_parent->slice) == root);
assert_se(UNIT_GET_SLICE(son) == parent);
assert_se(UNIT_GET_SLICE(daughter) == parent);
assert_se(UNIT_GET_SLICE(parent_deep) == parent);
assert_se(UNIT_GET_SLICE(grandchild) == parent_deep);
assert_se(UNIT_GET_SLICE(nomem_leaf) == nomem_parent);
root = UNIT_GET_SLICE(parent);
assert_se(UNIT_GET_SLICE(nomem_parent) == root);
/* Verify per-unit cgroups settings. */
ASSERT_CGROUP_MASK_JOINED(unit_get_own_mask(son), CGROUP_MASK_CPU);


@ -91,28 +91,28 @@ static int test_default_memory_low(void) {
assert_se(manager_load_startable_unit_or_warn(m, "dml.slice", NULL, &dml) >= 0);
assert_se(manager_load_startable_unit_or_warn(m, "dml-passthrough.slice", NULL, &dml_passthrough) >= 0);
assert_se(UNIT_DEREF(dml_passthrough->slice) == dml);
assert_se(UNIT_GET_SLICE(dml_passthrough) == dml);
assert_se(manager_load_startable_unit_or_warn(m, "dml-passthrough-empty.service", NULL, &dml_passthrough_empty) >= 0);
assert_se(UNIT_DEREF(dml_passthrough_empty->slice) == dml_passthrough);
assert_se(UNIT_GET_SLICE(dml_passthrough_empty) == dml_passthrough);
assert_se(manager_load_startable_unit_or_warn(m, "dml-passthrough-set-dml.service", NULL, &dml_passthrough_set_dml) >= 0);
assert_se(UNIT_DEREF(dml_passthrough_set_dml->slice) == dml_passthrough);
assert_se(UNIT_GET_SLICE(dml_passthrough_set_dml) == dml_passthrough);
assert_se(manager_load_startable_unit_or_warn(m, "dml-passthrough-set-ml.service", NULL, &dml_passthrough_set_ml) >= 0);
assert_se(UNIT_DEREF(dml_passthrough_set_ml->slice) == dml_passthrough);
assert_se(UNIT_GET_SLICE(dml_passthrough_set_ml) == dml_passthrough);
assert_se(manager_load_startable_unit_or_warn(m, "dml-override.slice", NULL, &dml_override) >= 0);
assert_se(UNIT_DEREF(dml_override->slice) == dml);
assert_se(UNIT_GET_SLICE(dml_override) == dml);
assert_se(manager_load_startable_unit_or_warn(m, "dml-override-empty.service", NULL, &dml_override_empty) >= 0);
assert_se(UNIT_DEREF(dml_override_empty->slice) == dml_override);
assert_se(UNIT_GET_SLICE(dml_override_empty) == dml_override);
assert_se(manager_load_startable_unit_or_warn(m, "dml-discard.slice", NULL, &dml_discard) >= 0);
assert_se(UNIT_DEREF(dml_discard->slice) == dml);
assert_se(UNIT_GET_SLICE(dml_discard) == dml);
assert_se(manager_load_startable_unit_or_warn(m, "dml-discard-empty.service", NULL, &dml_discard_empty) >= 0);
assert_se(UNIT_DEREF(dml_discard_empty->slice) == dml_discard);
assert_se(UNIT_GET_SLICE(dml_discard_empty) == dml_discard);
assert_se(manager_load_startable_unit_or_warn(m, "dml-discard-set-ml.service", NULL, &dml_discard_set_ml) >= 0);
assert_se(UNIT_DEREF(dml_discard_set_ml->slice) == dml_discard);
assert_se(UNIT_GET_SLICE(dml_discard_set_ml) == dml_discard);
root = UNIT_DEREF(dml->slice);
assert_se(!UNIT_ISSET(root->slice));
assert_se(root = UNIT_GET_SLICE(dml));
assert_se(!UNIT_GET_SLICE(root));
assert_se(unit_get_ancestor_memory_low(root) == CGROUP_LIMIT_MIN);


@ -6,16 +6,75 @@
#include "bus-util.h"
#include "manager.h"
#include "rm-rf.h"
#include "service.h"
#include "special.h"
#include "strv.h"
#include "tests.h"
#include "service.h"
#include "unit-serialize.h"
static void verify_dependency_atoms(void) {
UnitDependencyAtom combined = 0, multi_use_atoms = 0;
/* Let's guarantee that our dependency type/atom translation tables are fully correct */
for (UnitDependency d = 0; d < _UNIT_DEPENDENCY_MAX; d++) {
UnitDependencyAtom a;
UnitDependency reverse;
bool has_superset = false;
assert_se((a = unit_dependency_to_atom(d)) >= 0);
for (UnitDependency t = 0; t < _UNIT_DEPENDENCY_MAX; t++) {
UnitDependencyAtom b;
if (t == d)
continue;
assert_se((b = unit_dependency_to_atom(t)) >= 0);
if ((a & b) == a) {
has_superset = true;
break;
}
}
reverse = unit_dependency_from_unique_atom(a);
assert_se(reverse == _UNIT_DEPENDENCY_INVALID || reverse >= 0);
assert_se((reverse < 0) == has_superset); /* If one dependency type is a superset of another,
* then the reverse mapping is not unique, verify
* that. */
log_info("Verified dependency type: %s", unit_dependency_to_string(d));
multi_use_atoms |= combined & a;
combined |= a;
}
/* Make sure all atoms are used, i.e. there's at least one dependency type that references it. */
assert_se(combined == _UNIT_DEPENDENCY_ATOM_MAX);
for (UnitDependencyAtom a = 1; a <= _UNIT_DEPENDENCY_ATOM_MAX; a <<= 1) {
if (multi_use_atoms & a) {
/* If an atom is used by multiple dep types, then mapping the atom to a dependency is
* not unique and *must* fail */
assert_se(unit_dependency_from_unique_atom(a) == _UNIT_DEPENDENCY_INVALID);
continue;
}
/* If only a single dep type uses a specific atom, let's guarantee our mapping table is
complete, and thus the atom can be mapped to the single dep type that is used. */
assert_se(unit_dependency_from_unique_atom(a) >= 0);
}
}
int main(int argc, char *argv[]) {
_cleanup_(rm_rf_physical_and_freep) char *runtime_dir = NULL;
_cleanup_(sd_bus_error_free) sd_bus_error err = SD_BUS_ERROR_NULL;
_cleanup_(manager_freep) Manager *m = NULL;
Unit *a = NULL, *b = NULL, *c = NULL, *d = NULL, *e = NULL, *g = NULL,
*h = NULL, *i = NULL, *a_conj = NULL, *unit_with_multiple_dashes = NULL;
*h = NULL, *i = NULL, *a_conj = NULL, *unit_with_multiple_dashes = NULL, *stub = NULL;
Job *j;
int r;
@ -122,37 +181,83 @@ int main(int argc, char *argv[]) {
assert_se(manager_add_job(m, JOB_START, a_conj, JOB_REPLACE, NULL, NULL, &j) == -EDEADLK);
manager_dump_jobs(m, stdout, "\t");
assert_se(!hashmap_get(a->dependencies[UNIT_PROPAGATES_RELOAD_TO], b));
assert_se(!hashmap_get(b->dependencies[UNIT_RELOAD_PROPAGATED_FROM], a));
assert_se(!hashmap_get(a->dependencies[UNIT_PROPAGATES_RELOAD_TO], c));
assert_se(!hashmap_get(c->dependencies[UNIT_RELOAD_PROPAGATED_FROM], a));
assert_se(!hashmap_get(unit_get_dependencies(a, UNIT_PROPAGATES_RELOAD_TO), b));
assert_se(!hashmap_get(unit_get_dependencies(b, UNIT_RELOAD_PROPAGATED_FROM), a));
assert_se(!hashmap_get(unit_get_dependencies(a, UNIT_PROPAGATES_RELOAD_TO), c));
assert_se(!hashmap_get(unit_get_dependencies(c, UNIT_RELOAD_PROPAGATED_FROM), a));
assert_se(unit_add_dependency(a, UNIT_PROPAGATES_RELOAD_TO, b, true, UNIT_DEPENDENCY_UDEV) == 0);
assert_se(unit_add_dependency(a, UNIT_PROPAGATES_RELOAD_TO, c, true, UNIT_DEPENDENCY_PROC_SWAP) == 0);
assert_se(hashmap_get(a->dependencies[UNIT_PROPAGATES_RELOAD_TO], b));
assert_se(hashmap_get(b->dependencies[UNIT_RELOAD_PROPAGATED_FROM], a));
assert_se(hashmap_get(a->dependencies[UNIT_PROPAGATES_RELOAD_TO], c));
assert_se(hashmap_get(c->dependencies[UNIT_RELOAD_PROPAGATED_FROM], a));
assert_se(hashmap_get(unit_get_dependencies(a, UNIT_PROPAGATES_RELOAD_TO), b));
assert_se(hashmap_get(unit_get_dependencies(b, UNIT_RELOAD_PROPAGATED_FROM), a));
assert_se(hashmap_get(unit_get_dependencies(a, UNIT_PROPAGATES_RELOAD_TO), c));
assert_se(hashmap_get(unit_get_dependencies(c, UNIT_RELOAD_PROPAGATED_FROM), a));
unit_remove_dependencies(a, UNIT_DEPENDENCY_UDEV);
assert_se(!hashmap_get(a->dependencies[UNIT_PROPAGATES_RELOAD_TO], b));
assert_se(!hashmap_get(b->dependencies[UNIT_RELOAD_PROPAGATED_FROM], a));
assert_se(hashmap_get(a->dependencies[UNIT_PROPAGATES_RELOAD_TO], c));
assert_se(hashmap_get(c->dependencies[UNIT_RELOAD_PROPAGATED_FROM], a));
assert_se(!hashmap_get(unit_get_dependencies(a, UNIT_PROPAGATES_RELOAD_TO), b));
assert_se(!hashmap_get(unit_get_dependencies(b, UNIT_RELOAD_PROPAGATED_FROM), a));
assert_se(hashmap_get(unit_get_dependencies(a, UNIT_PROPAGATES_RELOAD_TO), c));
assert_se(hashmap_get(unit_get_dependencies(c, UNIT_RELOAD_PROPAGATED_FROM), a));
unit_remove_dependencies(a, UNIT_DEPENDENCY_PROC_SWAP);
assert_se(!hashmap_get(a->dependencies[UNIT_PROPAGATES_RELOAD_TO], b));
assert_se(!hashmap_get(b->dependencies[UNIT_RELOAD_PROPAGATED_FROM], a));
assert_se(!hashmap_get(a->dependencies[UNIT_PROPAGATES_RELOAD_TO], c));
assert_se(!hashmap_get(c->dependencies[UNIT_RELOAD_PROPAGATED_FROM], a));
assert_se(!hashmap_get(unit_get_dependencies(a, UNIT_PROPAGATES_RELOAD_TO), b));
assert_se(!hashmap_get(unit_get_dependencies(b, UNIT_RELOAD_PROPAGATED_FROM), a));
assert_se(!hashmap_get(unit_get_dependencies(a, UNIT_PROPAGATES_RELOAD_TO), c));
assert_se(!hashmap_get(unit_get_dependencies(c, UNIT_RELOAD_PROPAGATED_FROM), a));
assert_se(manager_load_unit(m, "unit-with-multiple-dashes.service", NULL, NULL, &unit_with_multiple_dashes) >= 0);
assert_se(strv_equal(unit_with_multiple_dashes->documentation, STRV_MAKE("man:test", "man:override2", "man:override3")));
assert_se(streq_ptr(unit_with_multiple_dashes->description, "override4"));
/* Now merge a synthetic unit into the existing one */
assert_se(unit_new_for_name(m, sizeof(Service), "merged.service", &stub) >= 0);
assert_se(unit_add_dependency_by_name(stub, UNIT_AFTER, SPECIAL_BASIC_TARGET, true, UNIT_DEPENDENCY_FILE) >= 0);
assert_se(unit_add_dependency_by_name(stub, UNIT_AFTER, "quux.target", true, UNIT_DEPENDENCY_FILE) >= 0);
assert_se(unit_add_dependency_by_name(stub, UNIT_AFTER, SPECIAL_ROOT_SLICE, true, UNIT_DEPENDENCY_FILE) >= 0);
assert_se(unit_add_dependency_by_name(stub, UNIT_REQUIRES, "non-existing.mount", true, UNIT_DEPENDENCY_FILE) >= 0);
assert_se(unit_add_dependency_by_name(stub, UNIT_ON_FAILURE, "non-existing-on-failure.target", true, UNIT_DEPENDENCY_FILE) >= 0);
log_info("/* Merging a+stub, dumps before */");
unit_dump(a, stderr, NULL);
unit_dump(stub, stderr, NULL);
assert_se(unit_merge(a, stub) >= 0);
log_info("/* Dump of merged a+stub */");
unit_dump(a, stderr, NULL);
assert_se(unit_has_dependency(a, UNIT_ATOM_AFTER, manager_get_unit(m, SPECIAL_BASIC_TARGET)));
assert_se(unit_has_dependency(a, UNIT_ATOM_AFTER, manager_get_unit(m, "quux.target")));
assert_se(unit_has_dependency(a, UNIT_ATOM_AFTER, manager_get_unit(m, SPECIAL_ROOT_SLICE)));
assert_se(unit_has_dependency(a, UNIT_ATOM_PULL_IN_START, manager_get_unit(m, "non-existing.mount")));
assert_se(unit_has_dependency(a, UNIT_ATOM_RETROACTIVE_START_REPLACE, manager_get_unit(m, "non-existing.mount")));
assert_se(unit_has_dependency(a, UNIT_ATOM_ON_FAILURE, manager_get_unit(m, "non-existing-on-failure.target")));
assert_se(!unit_has_dependency(a, UNIT_ATOM_ON_FAILURE, manager_get_unit(m, "basic.target")));
assert_se(!unit_has_dependency(a, UNIT_ATOM_PROPAGATES_RELOAD_TO, manager_get_unit(m, "non-existing-on-failure.target")));
assert_se(unit_has_name(a, "a.service"));
assert_se(unit_has_name(a, "merged.service"));
unsigned mm = 1;
Unit *other;
UNIT_FOREACH_DEPENDENCY(other, a, UNIT_ATOM_AFTER) {
mm *= unit_has_name(other, SPECIAL_BASIC_TARGET) ? 3 : 1;
mm *= unit_has_name(other, "quux.target") ? 5 : 1;
mm *= unit_has_name(other, SPECIAL_ROOT_SLICE) ? 7 : 1;
}
UNIT_FOREACH_DEPENDENCY(other, a, UNIT_ATOM_ON_FAILURE)
mm *= unit_has_name(other, "non-existing-on-failure.target") ? 11 : 1;
UNIT_FOREACH_DEPENDENCY(other, a, UNIT_ATOM_PULL_IN_START)
mm *= unit_has_name(other, "non-existing.mount") ? 13 : 1;
assert_se(mm == 3U*5U*7U*11U*13U);
verify_dependency_atoms();
return 0;
}

src/test/test-tpm2.c (new file)

@ -0,0 +1,34 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
#include "tpm2-util.h"
#include "tests.h"
static void test_tpm2_parse_pcrs(const char *s, uint32_t mask, int ret) {
uint32_t m;
assert_se(tpm2_parse_pcrs(s, &m) == ret);
if (ret >= 0)
assert_se(m == mask);
}
int main(int argc, char *argv[]) {
test_setup_logging(LOG_DEBUG);
test_tpm2_parse_pcrs("", 0, 0);
test_tpm2_parse_pcrs("0", 1, 0);
test_tpm2_parse_pcrs("1", 2, 0);
test_tpm2_parse_pcrs("0,1", 3, 0);
test_tpm2_parse_pcrs("0+1", 3, 0);
test_tpm2_parse_pcrs("0-1", 0, -EINVAL);
test_tpm2_parse_pcrs("0,1,2", 7, 0);
test_tpm2_parse_pcrs("0+1+2", 7, 0);
test_tpm2_parse_pcrs("0+1,2", 7, 0);
test_tpm2_parse_pcrs("0,1+2", 7, 0);
test_tpm2_parse_pcrs("0,2", 5, 0);
test_tpm2_parse_pcrs("0+2", 5, 0);
test_tpm2_parse_pcrs("foo", 0, -EINVAL);
return 0;
}


@ -0,0 +1 @@
../TEST-01-BASIC/Makefile


@ -0,0 +1,7 @@
#!/usr/bin/env bash
set -e
TEST_DESCRIPTION="test OnSuccess= + Uphold= + PropagatesStopTo= + BindsTo="
. $TEST_BASE_DIR/test-functions
do_test "$@" 57


@ -76,10 +76,13 @@ JoinsNamespaceOf=
OnFailure=
OnFailureIsolate=
OnFailureJobMode=
OnSuccess=
OnSuccessJobMode=
PartOf=
PropagateReloadFrom=
PropagateReloadTo=
PropagatesReloadTo=
PropagatesStopTo=
RebootArgument=
RefuseManualStart=
RefuseManualStop=
@ -97,8 +100,10 @@ StartLimitBurst=
StartLimitInterval=
StartLimitIntervalSec=
StopWhenUnneeded=
StopPropagatedFrom=
SuccessAction=
SuccessActionExitStatus=
Upholds=
Wants=
[Install]
Alias=


@ -0,0 +1,8 @@
[Unit]
Description=Unit with BindsTo=
BindsTo=testsuite-57-bound-by.service
After=testsuite-57-bound-by.service
[Service]
ExecStart=/bin/sleep infinity
ExecStopPost=systemctl kill --kill-who=main -sRTMIN+1 testsuite-57.service


@ -0,0 +1,5 @@
[Unit]
Description=Unit with BoundBy=
[Service]
ExecStart=/bin/sleep 0.7


@ -0,0 +1,6 @@
[Unit]
Description=Failing unit
OnFailure=testsuite-57-uphold.service
[Service]
ExecStart=/bin/false


@ -0,0 +1,9 @@
[Unit]
Description=Stop Propagation Receiver
Wants=testsuite-57-prop-stop-two.service
After=testsuite-57-prop-stop-two.service
StopPropagatedFrom=testsuite-57-prop-stop-two.service

[Service]
ExecStart=/bin/sleep infinity
ExecStopPost=systemctl kill --kill-who=main -sUSR2 testsuite-57.service


@ -0,0 +1,5 @@
[Unit]
Description=Stop Propagation Sender

[Service]
ExecStart=/bin/sleep 1.5


@ -0,0 +1,10 @@
[Unit]
Description=Shortlived Unit
StopWhenUnneeded=yes

# Bump up the start limit logic, so that we can be restarted frequently enough
StartLimitBurst=15
StartLimitIntervalSec=1h

[Service]
ExecStart=/usr/lib/systemd/tests/testdata/units/testsuite-57-short-lived.sh


@ -0,0 +1,18 @@
#!/usr/bin/env bash
set -ex

if [ -f /tmp/testsuite-57.counter ] ; then
    read -r counter < /tmp/testsuite-57.counter
    counter=$(("$counter" + 1))
else
    counter=0
fi

echo "$counter" > /tmp/testsuite-57.counter

if [ "$counter" -eq 5 ] ; then
    systemctl kill --kill-who=main -sUSR1 testsuite-57.service
fi

exec sleep 1.5


@ -0,0 +1,6 @@
[Unit]
Description=Succeeding unit
OnSuccess=testsuite-57-fail.service

[Service]
ExecStart=/bin/true


@ -0,0 +1,6 @@
[Unit]
Description=Upholding Unit
Upholds=testsuite-57-short-lived.service

[Service]
ExecStart=/bin/sleep infinity


@ -0,0 +1,7 @@
[Unit]
Description=TEST-57-ONSUCCESS-UPHOLD

[Service]
ExecStartPre=rm -f /failed /testok
ExecStart=/usr/lib/systemd/tests/testdata/units/%N.sh
Type=oneshot
test/units/testsuite-57.sh Executable file

@ -0,0 +1,68 @@
#!/usr/bin/env bash
set -eux
set -o pipefail

systemd-analyze log-level debug
systemd-analyze log-target journal

# Idea is this:
# 1. we start testsuite-57-success.service
# 2. which through OnSuccess= starts testsuite-57-fail.service,
# 3. which through OnFailure= starts testsuite-57-uphold.service,
# 4. which through Upholds= starts/keeps testsuite-57-short-lived.service running,
# 5. which sleeps 1.5s when invoked, and sends us a SIGUSR1 once its run counter reaches 5
# 6. once we got that we finish cleanly

sigusr1=0
trap sigusr1=1 SIGUSR1

systemctl start testsuite-57-success.service

while [ "$sigusr1" -eq 0 ] ; do
    sleep .5
done

systemctl stop testsuite-57-uphold.service

# Idea is this:
# 1. we start testsuite-57-prop-stop-one.service
# 2. which through Wants=/After= pulls in testsuite-57-prop-stop-two.service as well
# 3. testsuite-57-prop-stop-one.service then sleeps indefinitely
# 4. testsuite-57-prop-stop-two.service sleeps a short time and exits
# 5. the StopPropagatedFrom= dependency between the two should ensure *both* will exit as a result
# 6. an ExecStopPost= line on testsuite-57-prop-stop-one.service will send us a SIGUSR2
# 7. once we got that we finish cleanly

sigusr2=0
trap sigusr2=1 SIGUSR2

systemctl start testsuite-57-prop-stop-one.service

while [ "$sigusr2" -eq 0 ] ; do
    sleep .5
done

# Idea is this:
# 1. we start testsuite-57-binds-to.service
# 2. which through BindsTo=/After= pulls in testsuite-57-bound-by.service as well
# 3. testsuite-57-bound-by.service suddenly dies
# 4. testsuite-57-binds-to.service should then also be pulled down (it otherwise just hangs)
# 5. an ExecStopPost= line on testsuite-57-binds-to.service will send us a SIGRTMIN+1
# 6. once we got that we finish cleanly

sigrtmin1=0
trap sigrtmin1=1 SIGRTMIN+1

systemctl start testsuite-57-binds-to.service

while [ "$sigrtmin1" -eq 0 ] ; do
    sleep .5
done

systemd-analyze log-level info

echo OK >/testok

exit 0