Compare commits

...

16 Commits

Author SHA1 Message Date
Lennart Poettering d8874c5e6c
Merge 8f9cae00f5 into ae04218383 2025-04-18 16:52:18 +08:00
Marcos Alano ae04218383
hwdb: add G-Mode key support (#37175)
Add G-Mode key, usually Fn+F9.

Closes #30824
2025-04-18 17:43:26 +09:00
Yu Watanabe 2a6ca54154
hashmap: kill hashmap_free_with_destructor() and friends (#37111)
Now the destructor is always set in hash_ops when necessary. Hence,
hashmap_free_with_destructor() and friends are no longer necessary.
Let's kill them.
2025-04-18 17:40:51 +09:00
Yu Watanabe 39dd06dbc4 meson: build tests for nspawn even if -Dnspawn= is disabled
Follow-up for d95818f522.
Fixes #36880.
2025-04-18 09:03:33 +02:00
Zbigniew Jędrzejewski-Szmek a30684b983
udev: several follow-ups for recent change about listening fds (#37162) 2025-04-18 08:48:08 +02:00
Yu Watanabe bdf4f200fd network: update comment as hashmap_free_with_destructor() does not exist anymore 2025-04-18 09:16:44 +09:00
Yu Watanabe 4cbc25ab4c hashmap: drop hashmap_free_with_destructor() and friends 2025-04-18 09:16:44 +09:00
Yu Watanabe 885001ed5d hashmap: drop unused free func arguments in hashmap_free() and hashmap_clear() 2025-04-18 09:16:44 +09:00
Yu Watanabe 2d4c4d9e10 set: drop unused set_free_free() 2025-04-18 09:16:44 +09:00
Yu Watanabe 828513ee3e test: make the copied set not take the ownership of elements 2025-04-18 09:16:44 +09:00
Yu Watanabe b0a2d49b61 test: use string_hash_ops_free 2025-04-18 09:16:44 +09:00
Yu Watanabe f6a2a9ba93 daemon-util: remove existing fds with the same name from fdstore
Currently, all use cases of notify_push_fd()/notify_push_fdf()
assume that the name of each fd in the fdstore is unique.
For safety, let's remove the existing fds before pushing a new one
to avoid multiple fds with the same name stored in the fdstore.
2025-04-18 09:12:43 +09:00
Yu Watanabe 1785961660 udev: re-add unintentionally dropped error log
Follow-up for 9b6bf4e10e.
2025-04-18 09:06:09 +09:00
Lennart Poettering 8f9cae00f5 update TODO 2025-04-17 18:35:15 +02:00
Lennart Poettering 859bfcf6af test: add integration test for concurrency limits 2025-04-17 18:35:15 +02:00
Lennart Poettering 79be1cf55a pid1: add a concurrency limit to slice units
Fixes: #35862
2025-04-17 18:35:02 +02:00
34 changed files with 495 additions and 166 deletions

TODO

@@ -400,19 +400,6 @@ Features:
 * in pid1: include ExecStart= cmdlines (and other Exec*= cmdlines) in polkit
   request, so that policies can match against command lines.
-* account number of units currently in activating/active/deactivating state in
-  each slice, and expose this as a property of the slice, given this is a key
-  metric of the resource management entity that a slice is. (maybe add a 2nd
-  metric for units assigned to the slice, that also includes those with pending
-  jobs)
-* maybe allow putting a "soft" limit on the number concurrently active units in
-  a slice, as per the previous item. When the limit is reached delay further
-  activations until number is below the threshold again, leaving the unit in
-  the job queue. Thus, multiple resource intensive tasks can be scheduled as
-  units in the same slice and they will be executed with an upper limit on
-  concurrently running tasks.
 * importd: introduce a per-user instance, that downloads into per-user DDI dirs
 * sysupdated: similar


@@ -383,6 +383,7 @@ evdev:name:gpio-keys:phys:gpio-keys/input0:ev:3:dmi:bvn*:bvr*:bd*:svncube:pni1-T
 ###########################################################
 evdev:atkbd:dmi:bvn*:bvr*:bd*:svnDell*:pn*:*
+ KEYBOARD_KEY_68=prog2 # G-Mode (Dell-specific)
  KEYBOARD_KEY_81=playpause # Play/Pause
  KEYBOARD_KEY_82=stopcd # Stop
  KEYBOARD_KEY_83=previoussong # Previous song


@@ -10662,6 +10662,12 @@ node /org/freedesktop/systemd1/unit/system_2eslice {
       RemoveSubgroup(in  s subcgroup,
                      in  t flags);
   properties:
+      @org.freedesktop.DBus.Property.EmitsChangedSignal("false")
+      readonly u ConcurrencyHardMax = ...;
+      @org.freedesktop.DBus.Property.EmitsChangedSignal("false")
+      readonly u ConcurrencySoftMax = ...;
+      @org.freedesktop.DBus.Property.EmitsChangedSignal("false")
+      readonly u NCurrentlyActive = ...;
       @org.freedesktop.DBus.Property.EmitsChangedSignal("false")
       readonly s Slice = '...';
       @org.freedesktop.DBus.Property.EmitsChangedSignal("false")
@@ -10844,6 +10850,12 @@ node /org/freedesktop/systemd1/unit/system_2eslice {
 <!--method RemoveSubgroup is not documented!-->
+<!--property ConcurrencyHardMax is not documented!-->
+<!--property ConcurrencySoftMax is not documented!-->
+<!--property NCurrentlyActive is not documented!-->
 <!--property Slice is not documented!-->
 <!--property ControlGroupId is not documented!-->
@@ -11020,6 +11032,12 @@ node /org/freedesktop/systemd1/unit/system_2eslice {
 <variablelist class="dbus-method" generated="True" extra-ref="RemoveSubgroup()"/>
+<variablelist class="dbus-property" generated="True" extra-ref="ConcurrencyHardMax"/>
+<variablelist class="dbus-property" generated="True" extra-ref="ConcurrencySoftMax"/>
+<variablelist class="dbus-property" generated="True" extra-ref="NCurrentlyActive"/>
 <variablelist class="dbus-property" generated="True" extra-ref="Slice"/>
 <variablelist class="dbus-property" generated="True" extra-ref="ControlGroup"/>
@@ -12241,7 +12259,10 @@ $ gdbus introspect --system --dest org.freedesktop.systemd1 \
 <varname>EffectiveTasksMax</varname>, and
 <varname>MemoryZSwapWriteback</varname> were added in version 256.</para>
 <para><varname>ManagedOOMMemoryPressureDurationUSec</varname> was added in version 257.</para>
-<para><function>RemoveSubgroup()</function> was added in version 258.</para>
+<para><varname>ConcurrencyHardMax</varname>,
+<varname>ConcurrencySoftMax</varname>,
+<varname>NCurrentlyActive</varname> and
+<function>RemoveSubgroup()</function> were added in version 258.</para>
 </refsect2>
 <refsect2>
 <title>Scope Unit Objects</title>


@@ -3,7 +3,7 @@
   "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
 <!-- SPDX-License-Identifier: LGPL-2.1-or-later -->
-<refentry id="systemd.slice">
+<refentry id="systemd.slice" xmlns:xi="http://www.w3.org/2001/XInclude">
   <refentryinfo>
     <title>systemd.slice</title>
     <productname>systemd</productname>
@@ -104,9 +104,43 @@
 <para>Slice unit files may include [Unit] and [Install] sections, which are described in
 <citerefentry><refentrytitle>systemd.unit</refentrytitle><manvolnum>5</manvolnum></citerefentry>.</para>
-<para>Slice files may include a [Slice] section. Options that may be used in this section are shared with other unit types.
-These options are documented in
+<para>Slice files may include a [Slice] section. Many options that may be used in this section are shared
+with other unit types. These options are documented in
 <citerefentry><refentrytitle>systemd.resource-control</refentrytitle><manvolnum>5</manvolnum></citerefentry>.</para>
+<para>The options specific to the [Slice] section of slice units are the following:</para>
+<variablelist class='unit-directives'>
+  <varlistentry>
+    <term><varname>ConcurrencyHardMax=</varname></term>
+    <term><varname>ConcurrencySoftMax=</varname></term>
+    <listitem><para>Configures a hard and a soft limit on the maximum number of units assigned to this
+    slice (or any descendant slices) that may be active at the same time. If the hard limit is reached, no
+    further units associated with the slice may be activated, and their activation will fail with an
+    error. If the soft limit is reached, any further requested activation of units will be queued, but no
+    immediate error is generated. The queued activation job will remain queued until the number of
+    concurrently active units within the slice is below the limit again.</para>
+    <para>If the special value <literal>infinity</literal> is specified, no concurrency limit is
+    enforced. This is the default.</para>
+    <para>Note that if multiple start jobs are queued for units and all their dependencies are fulfilled,
+    they'll be processed in an order that depends on the unit type, the CPU weight (for unit types
+    that know the concept, such as services), the nice level (similar), and finally the alphabetical order
+    of the unit names. This may be used to influence dispatching order when using
+    <varname>ConcurrencySoftMax=</varname> to pace concurrency within a slice unit.</para>
+    <para>Note that these options have a hierarchical effect: a limit set for a slice unit applies both
+    to the units immediately within the slice and to all units further down the slice tree. Also
+    note that each sub-slice unit counts as one unit too, and thus when choosing a limit for a slice
+    hierarchy the limit must provide room for both the payload units (i.e. services, mounts, …) and
+    structural units (i.e. slice units), if any are defined.</para>
+    <xi:include href="version-info.xml" xpointer="v258"/></listitem>
+  </varlistentry>
+</variablelist>
 </refsect1>
 <refsect1>
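A unit file making use of the new directives could look like the following. The slice name and the limit values are illustrative, not taken from the patch:

```ini
# backup.slice — pace concurrently running backup jobs
[Unit]
Description=Backup jobs with concurrency pacing

[Slice]
# Queue further start jobs once two member units are active:
ConcurrencySoftMax=2
# Refuse start jobs outright beyond ten (queued starts count here too):
ConcurrencyHardMax=10
```

Member services would then opt in with `Slice=backup.slice` in their [Service] section; the limits apply to everything below the slice in the hierarchy.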


@@ -912,24 +912,20 @@ static void hashmap_free_no_clear(HashmapBase *h) {
         free(h);
 }
-HashmapBase* _hashmap_free(HashmapBase *h, free_func_t default_free_key, free_func_t default_free_value) {
+HashmapBase* _hashmap_free(HashmapBase *h) {
         if (h) {
-                _hashmap_clear(h, default_free_key, default_free_value);
+                _hashmap_clear(h);
                 hashmap_free_no_clear(h);
         }
         return NULL;
 }
-void _hashmap_clear(HashmapBase *h, free_func_t default_free_key, free_func_t default_free_value) {
-        free_func_t free_key, free_value;
+void _hashmap_clear(HashmapBase *h) {
         if (!h)
                 return;
-        free_key = h->hash_ops->free_key ?: default_free_key;
-        free_value = h->hash_ops->free_value ?: default_free_value;
-        if (free_key || free_value) {
+        if (h->hash_ops->free_key || h->hash_ops->free_value) {
                 /* If destructor calls are defined, let's destroy things defensively: let's take the item out of the
                  * hash table, and only then call the destructor functions. If these destructors then try to unregister
@@ -941,11 +937,11 @@ void _hashmap_clear(HashmapBase *h, free_func_t default_free_key, free_func_t de
                 v = _hashmap_first_key_and_value(h, true, &k);
-                if (free_key)
-                        free_key(k);
+                if (h->hash_ops->free_key)
+                        h->hash_ops->free_key(k);
-                if (free_value)
-                        free_value(v);
+                if (h->hash_ops->free_value)
+                        h->hash_ops->free_value(v);
         }
 }
@@ -1780,7 +1776,7 @@ HashmapBase* _hashmap_copy(HashmapBase *h HASHMAP_DEBUG_PARAMS) {
         }
         if (r < 0)
-                return _hashmap_free(copy, NULL, NULL);
+                return _hashmap_free(copy);
         return copy;
 }


@@ -93,12 +93,12 @@ OrderedHashmap* _ordered_hashmap_new(const struct hash_ops *hash_ops HASHMAP_DE
 #define ordered_hashmap_free_and_replace(a, b) \
         free_and_replace_full(a, b, ordered_hashmap_free)
-HashmapBase* _hashmap_free(HashmapBase *h, free_func_t default_free_key, free_func_t default_free_value);
+HashmapBase* _hashmap_free(HashmapBase *h);
 static inline Hashmap* hashmap_free(Hashmap *h) {
-        return (void*) _hashmap_free(HASHMAP_BASE(h), NULL, NULL);
+        return (void*) _hashmap_free(HASHMAP_BASE(h));
 }
 static inline OrderedHashmap* ordered_hashmap_free(OrderedHashmap *h) {
-        return (void*) _hashmap_free(HASHMAP_BASE(h), NULL, NULL);
+        return (void*) _hashmap_free(HASHMAP_BASE(h));
 }
 IteratedCache* iterated_cache_free(IteratedCache *cache);
@@ -266,12 +266,12 @@ static inline bool ordered_hashmap_iterate(OrderedHashmap *h, Iterator *i, void
         return _hashmap_iterate(HASHMAP_BASE(h), i, value, key);
 }
-void _hashmap_clear(HashmapBase *h, free_func_t default_free_key, free_func_t default_free_value);
+void _hashmap_clear(HashmapBase *h);
 static inline void hashmap_clear(Hashmap *h) {
-        _hashmap_clear(HASHMAP_BASE(h), NULL, NULL);
+        _hashmap_clear(HASHMAP_BASE(h));
 }
 static inline void ordered_hashmap_clear(OrderedHashmap *h) {
-        _hashmap_clear(HASHMAP_BASE(h), NULL, NULL);
+        _hashmap_clear(HASHMAP_BASE(h));
 }
 /*
@@ -331,27 +331,6 @@ static inline void *ordered_hashmap_first_key(OrderedHashmap *h) {
         return _hashmap_first_key(HASHMAP_BASE(h), false);
 }
-#define hashmap_clear_with_destructor(h, f)                     \
-        ({                                                      \
-                Hashmap *_h = (h);                              \
-                void *_item;                                    \
-                while ((_item = hashmap_steal_first(_h)))       \
-                        f(_item);                               \
-                _h;                                             \
-        })
-#define hashmap_free_with_destructor(h, f)                      \
-        hashmap_free(hashmap_clear_with_destructor(h, f))
-#define ordered_hashmap_clear_with_destructor(h, f)             \
-        ({                                                      \
-                OrderedHashmap *_h = (h);                       \
-                void *_item;                                    \
-                while ((_item = ordered_hashmap_steal_first(_h))) \
-                        f(_item);                               \
-                _h;                                             \
-        })
-#define ordered_hashmap_free_with_destructor(h, f)              \
-        ordered_hashmap_free(ordered_hashmap_clear_with_destructor(h, f))
 /* no hashmap_next */
 void* ordered_hashmap_next(OrderedHashmap *h, const void *key);


@@ -83,17 +83,6 @@ void ordered_set_print(FILE *f, const char *field, OrderedSet *s);
 #define ORDERED_SET_FOREACH(e, s) \
         _ORDERED_SET_FOREACH(e, s, UNIQ_T(i, UNIQ))
-#define ordered_set_clear_with_destructor(s, f)                 \
-        ({                                                      \
-                OrderedSet *_s = (s);                           \
-                void *_item;                                    \
-                while ((_item = ordered_set_steal_first(_s)))   \
-                        f(_item);                               \
-                _s;                                             \
-        })
-#define ordered_set_free_with_destructor(s, f)                  \
-        ordered_set_free(ordered_set_clear_with_destructor(s, f))
 DEFINE_TRIVIAL_CLEANUP_FUNC(OrderedSet*, ordered_set_free);
 #define _cleanup_ordered_set_free_ _cleanup_(ordered_set_freep)


@@ -12,15 +12,9 @@ Set* _set_new(const struct hash_ops *hash_ops HASHMAP_DEBUG_PARAMS);
 #define set_new(ops) _set_new(ops HASHMAP_DEBUG_SRC_ARGS)
 static inline Set* set_free(Set *s) {
-        return (Set*) _hashmap_free(HASHMAP_BASE(s), NULL, NULL);
+        return (Set*) _hashmap_free(HASHMAP_BASE(s));
 }
-static inline Set* set_free_free(Set *s) {
-        return (Set*) _hashmap_free(HASHMAP_BASE(s), free, NULL);
-}
-/* no set_free_free_free */
 #define set_copy(s) ((Set*) _hashmap_copy(HASHMAP_BASE(s) HASHMAP_DEBUG_SRC_ARGS))
 int _set_ensure_allocated(Set **s, const struct hash_ops *hash_ops HASHMAP_DEBUG_PARAMS);
@@ -77,30 +71,13 @@ static inline bool set_iterate(const Set *s, Iterator *i, void **value) {
 }
 static inline void set_clear(Set *s) {
-        _hashmap_clear(HASHMAP_BASE(s), NULL, NULL);
+        _hashmap_clear(HASHMAP_BASE(s));
 }
-static inline void set_clear_free(Set *s) {
-        _hashmap_clear(HASHMAP_BASE(s), free, NULL);
-}
-/* no set_clear_free_free */
 static inline void *set_steal_first(Set *s) {
         return _hashmap_first_key_and_value(HASHMAP_BASE(s), true, NULL);
 }
-#define set_clear_with_destructor(s, f)                 \
-        ({                                              \
-                Set *_s = (s);                          \
-                void *_item;                            \
-                while ((_item = set_steal_first(_s)))   \
-                        f(_item);                       \
-                _s;                                     \
-        })
-#define set_free_with_destructor(s, f)                  \
-        set_free(set_clear_with_destructor(s, f))
 /* no set_steal_first_key */
 /* no set_first_key */
@@ -145,10 +122,8 @@ int set_put_strsplit(Set *s, const char *v, const char *separators, ExtractFlags
         for (; ({ e = set_first(s); assert_se(!e || set_move_one(d, s, e) >= 0); e; }); )
 DEFINE_TRIVIAL_CLEANUP_FUNC(Set*, set_free);
-DEFINE_TRIVIAL_CLEANUP_FUNC(Set*, set_free_free);
 #define _cleanup_set_free_ _cleanup_(set_freep)
-#define _cleanup_set_free_free_ _cleanup_(set_free_freep)
 int set_strjoin(Set *s, const char *separator, bool wrap_with_separator, char **ret);


@@ -1,15 +1,64 @@
 /* SPDX-License-Identifier: LGPL-2.1-or-later */
+#include "bus-get-properties.h"
 #include "dbus-cgroup.h"
 #include "dbus-slice.h"
+#include "dbus-util.h"
 #include "slice.h"
 #include "unit.h"
+static int property_get_currently_active(
+                sd_bus *bus,
+                const char *path,
+                const char *interface,
+                const char *property,
+                sd_bus_message *reply,
+                void *userdata,
+                sd_bus_error *error) {
+        Slice *s = ASSERT_PTR(userdata);
+        assert(bus);
+        assert(reply);
+        return sd_bus_message_append(
+                        reply,
+                        "u",
+                        (uint32_t) slice_get_currently_active(s, /* ignore= */ NULL, /* with_pending= */ false));
+}
 const sd_bus_vtable bus_slice_vtable[] = {
         SD_BUS_VTABLE_START(0),
+        SD_BUS_PROPERTY("ConcurrencyHardMax", "u", bus_property_get_unsigned, offsetof(Slice, concurrency_hard_max), 0),
+        SD_BUS_PROPERTY("ConcurrencySoftMax", "u", bus_property_get_unsigned, offsetof(Slice, concurrency_soft_max), 0),
+        SD_BUS_PROPERTY("NCurrentlyActive", "u", property_get_currently_active, 0, 0),
         SD_BUS_VTABLE_END
 };
+static int bus_slice_set_transient_property(
+                Slice *s,
+                const char *name,
+                sd_bus_message *message,
+                UnitWriteFlags flags,
+                sd_bus_error *error) {
+        Unit *u = UNIT(s);
+        assert(s);
+        assert(name);
+        assert(message);
+        flags |= UNIT_PRIVATE;
+        if (streq(name, "ConcurrencyHardMax"))
+                return bus_set_transient_unsigned(u, name, &s->concurrency_hard_max, message, flags, error);
+        if (streq(name, "ConcurrencySoftMax"))
+                return bus_set_transient_unsigned(u, name, &s->concurrency_soft_max, message, flags, error);
+        return 0;
+}
 int bus_slice_set_property(
                 Unit *u,
                 const char *name,
@@ -18,11 +67,24 @@ int bus_slice_set_property(
                 sd_bus_error *error) {
         Slice *s = SLICE(u);
+        int r;
         assert(name);
         assert(u);
-        return bus_cgroup_set_property(u, &s->cgroup_context, name, message, flags, error);
+        r = bus_cgroup_set_property(u, &s->cgroup_context, name, message, flags, error);
+        if (r != 0)
+                return r;
+        if (u->transient && u->load_state == UNIT_STUB) {
+                /* This is a transient unit, let's allow a little more */
+                r = bus_slice_set_transient_property(s, name, message, flags, error);
+                if (r != 0)
+                        return r;
+        }
+        return 0;
 }
 int bus_slice_commit_properties(Unit *u) {
int bus_slice_commit_properties(Unit *u) { int bus_slice_commit_properties(Unit *u) {


@@ -652,6 +652,7 @@ static const char* job_done_message_format(Unit *u, JobType t, JobResult result)
         [JOB_COLLECTED] = "Unnecessary job was removed for %s.",
         [JOB_ONCE] = "Unit %s has been started before and cannot be started again.",
         [JOB_FROZEN] = "Cannot start frozen unit %s.",
+        [JOB_CONCURRENCY] = "Hard concurrency limit hit for slice of unit %s.",
 };
 static const char* const generic_finished_stop_job[_JOB_RESULT_MAX] = {
         [JOB_DONE] = "Stopped %s.",
@@ -726,6 +727,7 @@ static const struct {
         [JOB_COLLECTED] = { LOG_INFO, },
         [JOB_ONCE] = { LOG_ERR, ANSI_HIGHLIGHT_RED, " ONCE " },
         [JOB_FROZEN] = { LOG_ERR, ANSI_HIGHLIGHT_RED, "FROZEN" },
+        [JOB_CONCURRENCY] = { LOG_ERR, ANSI_HIGHLIGHT_RED, "CONCUR" },
 };
 static const char* job_done_mid(JobType type, JobResult result) {
@@ -978,6 +980,8 @@ int job_run_and_invalidate(Job *j) {
                 r = job_finish_and_invalidate(j, JOB_ONCE, true, false);
         else if (r == -EDEADLK)
                 r = job_finish_and_invalidate(j, JOB_FROZEN, true, false);
+        else if (r == -ETOOMANYREFS)
+                r = job_finish_and_invalidate(j, JOB_CONCURRENCY, /* recursive= */ true, /* already= */ false);
         else if (r < 0)
                 r = job_finish_and_invalidate(j, JOB_FAILED, true, false);
 }
@@ -1035,7 +1039,7 @@ int job_finish_and_invalidate(Job *j, JobResult result, bool recursive, bool alr
                 goto finish;
         }
-        if (IN_SET(result, JOB_FAILED, JOB_INVALID, JOB_FROZEN))
+        if (IN_SET(result, JOB_FAILED, JOB_INVALID, JOB_FROZEN, JOB_CONCURRENCY))
                 j->manager->n_failed_jobs++;
         job_uninstall(j);
@@ -1667,6 +1671,7 @@ static const char* const job_result_table[_JOB_RESULT_MAX] = {
         [JOB_COLLECTED] = "collected",
         [JOB_ONCE] = "once",
         [JOB_FROZEN] = "frozen",
+        [JOB_CONCURRENCY] = "concurrency",
 };
 DEFINE_STRING_TABLE_LOOKUP(job_result, JobResult);


@@ -83,6 +83,7 @@ enum JobResult {
         JOB_COLLECTED,    /* Job was garbage collected, since nothing needed it anymore */
         JOB_ONCE,         /* Unit was started before, and hence can't be started again */
         JOB_FROZEN,       /* Unit is currently frozen, so we can't safely operate on it */
+        JOB_CONCURRENCY,  /* Slice the unit is in has its hard concurrency limit reached */
         _JOB_RESULT_MAX,
         _JOB_RESULT_INVALID = -EINVAL,
 };


@@ -592,6 +592,8 @@ Path.MakeDirectory, config_parse_bool,
 Path.DirectoryMode, config_parse_mode, 0, offsetof(Path, directory_mode)
 Path.TriggerLimitIntervalSec, config_parse_sec, 0, offsetof(Path, trigger_limit.interval)
 Path.TriggerLimitBurst, config_parse_unsigned, 0, offsetof(Path, trigger_limit.burst)
+Slice.ConcurrencySoftMax, config_parse_concurrency_max, 0, offsetof(Slice, concurrency_soft_max)
+Slice.ConcurrencyHardMax, config_parse_concurrency_max, 0, offsetof(Slice, concurrency_hard_max)
 {{ CGROUP_CONTEXT_CONFIG_ITEMS('Slice') }}
 {{ CGROUP_CONTEXT_CONFIG_ITEMS('Scope') }}
 {{ KILL_CONTEXT_CONFIG_ITEMS('Scope') }}


@@ -5941,6 +5941,28 @@ int config_parse_mount_node(
         return config_parse_string(unit, filename, line, section, section_line, lvalue, ltype, path, data, userdata);
 }
+int config_parse_concurrency_max(
+                const char *unit,
+                const char *filename,
+                unsigned line,
+                const char *section,
+                unsigned section_line,
+                const char *lvalue,
+                int ltype,
+                const char *rvalue,
+                void *data,
+                void *userdata) {
+        unsigned *concurrency_max = ASSERT_PTR(data);
+        if (isempty(rvalue) || streq(rvalue, "infinity")) {
+                *concurrency_max = UINT_MAX;
+                return 0;
+        }
+        return config_parse_unsigned(unit, filename, line, section, section_line, lvalue, ltype, rvalue, data, userdata);
+}
 static int merge_by_names(Unit *u, Set *names, const char *id) {
         char *k;
         int r;


@@ -161,6 +161,7 @@ CONFIG_PARSER_PROTOTYPE(config_parse_open_file);
 CONFIG_PARSER_PROTOTYPE(config_parse_memory_pressure_watch);
 CONFIG_PARSER_PROTOTYPE(config_parse_cgroup_nft_set);
 CONFIG_PARSER_PROTOTYPE(config_parse_mount_node);
+CONFIG_PARSER_PROTOTYPE(config_parse_concurrency_max);
 /* gperf prototypes */
 const struct ConfigPerfItem* load_fragment_gperf_lookup(const char *key, GPERF_LEN_TYPE length);


@@ -21,10 +21,14 @@ static const UnitActiveState state_translation_table[_SLICE_STATE_MAX] = {
 };
 static void slice_init(Unit *u) {
+        Slice *s = SLICE(u);
         assert(u);
         assert(u->load_state == UNIT_STUB);
         u->ignore_on_isolate = true;
+        s->concurrency_hard_max = UINT_MAX;
+        s->concurrency_soft_max = UINT_MAX;
 }
 static void slice_set_state(Slice *s, SliceState state) {
@@ -385,6 +389,57 @@ static int slice_freezer_action(Unit *s, FreezerAction action) {
         return unit_cgroup_freezer_action(s, action);
 }
+unsigned slice_get_currently_active(Slice *slice, Unit *ignore, bool with_pending) {
+        Unit *u = ASSERT_PTR(UNIT(slice));
+        /* If 'ignore' is non-NULL and a unit contained in this slice (or any below) we'll ignore it when
+         * counting. */
+        unsigned n = 0;
+        Unit *member;
+        UNIT_FOREACH_DEPENDENCY(member, u, UNIT_ATOM_SLICE_OF) {
+                if (member == ignore)
+                        continue;
+                if (!UNIT_IS_INACTIVE_OR_FAILED(unit_active_state(member)) ||
+                    (with_pending && member->job && IN_SET(member->job->type, JOB_START, JOB_RESTART, JOB_RELOAD)))
+                        n++;
+                if (member->type == UNIT_SLICE)
+                        n += slice_get_currently_active(SLICE(member), ignore, with_pending);
+        }
+        return n;
+}
+bool slice_concurrency_soft_max_reached(Slice *slice, Unit *ignore) {
+        assert(slice);
+        if (slice->concurrency_soft_max != UINT_MAX &&
+            slice_get_currently_active(slice, ignore, /* with_pending= */ false) >= slice->concurrency_soft_max)
+                return true;
+        Unit *parent = UNIT_GET_SLICE(UNIT(slice));
+        if (parent)
+                return slice_concurrency_soft_max_reached(SLICE(parent), ignore);
+        return false;
+}
+bool slice_concurrency_hard_max_reached(Slice *slice, Unit *ignore) {
+        assert(slice);
+        if (slice->concurrency_hard_max != UINT_MAX &&
+            slice_get_currently_active(slice, ignore, /* with_pending= */ true) >= slice->concurrency_hard_max)
+                return true;
+        Unit *parent = UNIT_GET_SLICE(UNIT(slice));
+        if (parent)
+                return slice_concurrency_hard_max_reached(SLICE(parent), ignore);
+        return false;
+}
 const UnitVTable slice_vtable = {
         .object_size = sizeof(Slice),
         .cgroup_context_offset = offsetof(Slice, cgroup_context),


@@ -10,6 +10,9 @@ struct Slice {
         SliceState state, deserialized_state;
+        unsigned concurrency_soft_max;
+        unsigned concurrency_hard_max;
         CGroupContext cgroup_context;
         CGroupRuntime *cgroup_runtime;
@@ -18,3 +21,8 @@ struct Slice {
 extern const UnitVTable slice_vtable;
 DEFINE_CAST(SLICE, Slice);
+unsigned slice_get_currently_active(Slice *slice, Unit *ignore, bool with_pending);
+bool slice_concurrency_hard_max_reached(Slice *slice, Unit *ignore);
+bool slice_concurrency_soft_max_reached(Slice *slice, Unit *ignore);


@@ -8,6 +8,7 @@
 #include "bus-common-errors.h"
 #include "bus-error.h"
 #include "dbus-unit.h"
+#include "slice.h"
 #include "strv.h"
 #include "terminal-util.h"
 #include "transaction.h"
@@ -986,6 +987,16 @@ int transaction_add_job_and_dependencies(
                                 "Job type %s is not applicable for unit %s.",
                                 job_type_to_string(type), unit->id);
+        if (type == JOB_START) {
+                /* The hard concurrency limit for slice units we already enforce when a job is enqueued */
+                Slice *slice = SLICE(UNIT_GET_SLICE(unit));
+                if (slice && slice_concurrency_hard_max_reached(slice, unit))
+                        return sd_bus_error_setf(
+                                        e, BUS_ERROR_CONCURRENCY_LIMIT_REACHED,
+                                        "Concurrency limit of the slice unit '%s' (or any of its parents) the unit '%s' is contained in has been reached, refusing start job.",
+                                        UNIT(slice)->id, unit->id);
+        }
         /* First add the job. */
         ret = transaction_add_one_job(tr, type, unit, &is_new);
         if (!ret)


@@ -1852,19 +1852,21 @@ static bool unit_verify_deps(Unit *u) {
}
/* Errors that aren't really errors:
 * -EALREADY: Unit is already started.
 * -ECOMM: Condition failed
 * -EAGAIN: An operation is already in progress. Retry later.
 *
 * Errors that are real errors:
 * -EBADR: This unit type does not support starting.
 * -ECANCELED: Start limit hit, too many requests for now
 * -EPROTO: Assert failed
 * -EINVAL: Unit not loaded
 * -EOPNOTSUPP: Unit type not supported
 * -ENOLINK: The necessary dependencies are not fulfilled.
 * -ESTALE: This unit has been started before and can't be started a second time
 * -EDEADLK: This unit is frozen
 * -ENOENT: This is a triggering unit and unit to trigger is not loaded
 * -ETOOMANYREFS: The hard concurrency limit of at least one of the slices the unit is contained in has been reached
 */
int unit_start(Unit *u, ActivationDetails *details) {
UnitActiveState state;
@@ -1946,6 +1948,24 @@ int unit_start(Unit *u, ActivationDetails *details) {
if (!UNIT_VTABLE(u)->start)
return -EBADR;
if (UNIT_IS_INACTIVE_OR_FAILED(state)) {
Slice *slice = SLICE(UNIT_GET_SLICE(u));
if (slice) {
/* Check hard concurrency limit. Note this is partially redundant, we already checked
* this when enqueuing jobs. However, between the time when we enqueued this and the
* time we are dispatching the queue the configuration might have changed, hence
* check here again */
if (slice_concurrency_hard_max_reached(slice, u))
return -ETOOMANYREFS;
/* Also check the soft concurrency limit, and return EAGAIN so that the job is kept in
* the queue */
if (slice_concurrency_soft_max_reached(slice, u))
return -EAGAIN; /* Try again, keep in queue */
}
}
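The dispatch-time check above distinguishes the two limits by errno: the hard limit fails the start for good, the soft limit just keeps the job queued. A self-contained sketch of that convention (toy types and names, not the real systemd structures):

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>
#include <stddef.h>

/* Illustrative stand-in for a slice with both limits; UINT_MAX == unlimited. */
typedef struct ToySlice {
        struct ToySlice *parent;
        unsigned concurrency_hard_max;
        unsigned concurrency_soft_max;
        unsigned n_currently_active;
} ToySlice;

/* Mirrors the error convention documented above: the hard limit makes the
 * start fail (-ETOOMANYREFS), the soft limit only delays it (-EAGAIN). */
static int toy_start_allowed(const ToySlice *s) {
        for (; s; s = s->parent) {
                if (s->concurrency_hard_max != UINT_MAX &&
                    s->n_currently_active >= s->concurrency_hard_max)
                        return -ETOOMANYREFS; /* job fails */
                if (s->concurrency_soft_max != UINT_MAX &&
                    s->n_currently_active >= s->concurrency_soft_max)
                        return -EAGAIN;       /* job stays in the queue */
        }
        return 0;
}
```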
/* We don't suppress calls to ->start() here when we are already starting, to allow this request to
 * be used as a "hurry up" call, for example when the unit is in some "auto restart" state where it
 * waits for a holdoff timer to elapse before it will start again. */
@@ -2601,6 +2621,45 @@ static bool unit_process_job(Job *j, UnitActiveState ns, bool reload_success) {
return unexpected;
}
static void unit_recursive_add_to_run_queue(Unit *u) {
assert(u);
if (u->job)
job_add_to_run_queue(u->job);
Unit *child;
UNIT_FOREACH_DEPENDENCY(child, u, UNIT_ATOM_SLICE_OF) {
if (!child->job)
continue;
unit_recursive_add_to_run_queue(child);
}
}
static void unit_check_concurrency_limit(Unit *u) {
assert(u);
Unit *slice = UNIT_GET_SLICE(u);
if (!slice)
return;
/* If a unit was stopped, maybe it has pending siblings (or children thereof) that can be started now */
if (SLICE(slice)->concurrency_soft_max != UINT_MAX) {
Unit *sibling;
UNIT_FOREACH_DEPENDENCY(sibling, slice, UNIT_ATOM_SLICE_OF) {
if (sibling == u)
continue;
unit_recursive_add_to_run_queue(sibling);
}
}
/* Also go up the tree. */
unit_check_concurrency_limit(slice);
}
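The wake-up logic above can be modeled standalone: when a unit in a slice stops, pending siblings (and units below sibling slices) are put back on the run queue, and the same is repeated one slice level up. All types and names below are made up for the sketch, with a fixed-size children array standing in for the dependency iteration:

```c
#include <assert.h>
#include <stddef.h>

#define TOY_MAX_CHILDREN 4

/* Toy unit/slice node: a slice lists its members in children[]. */
typedef struct ToyUnit {
        struct ToyUnit *slice;                       /* containing slice, or NULL */
        struct ToyUnit *children[TOY_MAX_CHILDREN];  /* members, if this is a slice */
        int has_pending_job;
        int requeued;                                /* set when re-added to the run queue */
} ToyUnit;

static void toy_recursive_requeue(ToyUnit *u) {
        if (u->has_pending_job)
                u->requeued = 1;
        for (size_t i = 0; i < TOY_MAX_CHILDREN && u->children[i]; i++)
                toy_recursive_requeue(u->children[i]);
}

/* Called for a unit that just stopped: wake up pending siblings, then
 * repeat at each ancestor slice. */
static void toy_check_concurrency_limit(ToyUnit *u) {
        ToyUnit *slice = u->slice;
        if (!slice)
                return;
        for (size_t i = 0; i < TOY_MAX_CHILDREN && slice->children[i]; i++)
                if (slice->children[i] != u)
                        toy_recursive_requeue(slice->children[i]);
        toy_check_concurrency_limit(slice); /* also go up the tree */
}
```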
void unit_notify(Unit *u, UnitActiveState os, UnitActiveState ns, bool reload_success) {
assert(u);
assert(os < _UNIT_ACTIVE_STATE_MAX);
@@ -2735,6 +2794,9 @@ void unit_notify(Unit *u, UnitActiveState os, UnitActiveState ns, bool reload_su
/* Maybe we can release some resources now? */
unit_submit_to_release_resources_queue(u);
/* Maybe the concurrency limits now allow dispatching of another start job in this slice? */
unit_check_concurrency_limit(u);
} else if (UNIT_IS_ACTIVE_OR_RELOADING(ns)) {
/* Start uphold units regardless if going up was expected or not */
check_uphold_dependencies(u);


@@ -27,6 +27,7 @@ BUS_ERROR_MAP_ELF_REGISTER const sd_bus_error_map bus_common_errors[] = {
SD_BUS_ERROR_MAP(BUS_ERROR_UNIT_GENERATED, EADDRNOTAVAIL),
SD_BUS_ERROR_MAP(BUS_ERROR_UNIT_LINKED, ELOOP),
SD_BUS_ERROR_MAP(BUS_ERROR_JOB_TYPE_NOT_APPLICABLE, EBADR),
SD_BUS_ERROR_MAP(BUS_ERROR_CONCURRENCY_LIMIT_REACHED, ETOOMANYREFS),
SD_BUS_ERROR_MAP(BUS_ERROR_NO_ISOLATION, EPERM),
SD_BUS_ERROR_MAP(BUS_ERROR_SHUTTING_DOWN, ECANCELED),
SD_BUS_ERROR_MAP(BUS_ERROR_SCOPE_NOT_RUNNING, EHOSTDOWN),


@@ -23,6 +23,7 @@
#define BUS_ERROR_UNIT_LINKED "org.freedesktop.systemd1.UnitLinked"
#define BUS_ERROR_UNIT_BAD_PATH "org.freedesktop.systemd1.UnitBadPath"
#define BUS_ERROR_JOB_TYPE_NOT_APPLICABLE "org.freedesktop.systemd1.JobTypeNotApplicable"
#define BUS_ERROR_CONCURRENCY_LIMIT_REACHED "org.freedesktop.systemd1.ConcurrencyLimitReached"
#define BUS_ERROR_NO_ISOLATION "org.freedesktop.systemd1.NoIsolation"
#define BUS_ERROR_SHUTTING_DOWN "org.freedesktop.systemd1.ShuttingDown"
#define BUS_ERROR_SCOPE_NOT_RUNNING "org.freedesktop.systemd1.ScopeNotRunning"
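The new constant plus its map entry let a client translate the D-Bus error name back into a negative errno. A reduced sketch of such a lookup (only a few entries; toy_bus_error_to_errno is a made-up helper, the real translation goes through the sd_bus_error_map table above):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Reduced stand-in for the error-name-to-errno table. */
static const struct {
        const char *name;
        int errno_value;
} toy_error_map[] = {
        { "org.freedesktop.systemd1.JobTypeNotApplicable",    EBADR        },
        { "org.freedesktop.systemd1.ConcurrencyLimitReached", ETOOMANYREFS },
        { "org.freedesktop.systemd1.NoIsolation",             EPERM        },
};

static int toy_bus_error_to_errno(const char *name) {
        for (size_t i = 0; i < sizeof(toy_error_map) / sizeof(toy_error_map[0]); i++)
                if (strcmp(toy_error_map[i].name, name) == 0)
                        return -toy_error_map[i].errno_value;
        return -EIO; /* fallback for unknown error names */
}
```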


@@ -1163,8 +1163,7 @@ int netdev_reload(Manager *manager) {
}
/* Detach old NetDev objects from Manager.
 * Note, the same object may be registered with multiple names, and netdev_detach() may drop multiple
 * entries. Hence, hashmap_free_with_destructor() cannot be used. */
 * The same object may be registered with multiple names, and netdev_detach() may drop multiple entries. */
for (NetDev *n; (n = hashmap_first(manager->netdevs)); )
netdev_detach(n);


@@ -683,8 +683,7 @@ Manager* manager_free(Manager *m) {
m->dhcp_pd_subnet_ids = set_free(m->dhcp_pd_subnet_ids);
m->networks = ordered_hashmap_free(m->networks);
/* The same object may be registered with multiple names, and netdev_detach() may drop multiple
 * entries. Hence, hashmap_free_with_destructor() cannot be used. */
/* The same object may be registered with multiple names, and netdev_detach() may drop multiple entries. */
for (NetDev *n; (n = hashmap_first(m->netdevs)); )
netdev_detach(n);
m->netdevs = hashmap_free(m->netdevs);


@@ -1,9 +1,5 @@
# SPDX-License-Identifier: LGPL-2.1-or-later
if conf.get('ENABLE_NSPAWN') != 1
subdir_done()
endif
libnspawn_core_sources = files(
'nspawn-bind-user.c',
'nspawn-cgroup.c',
@@ -52,6 +48,7 @@ executables += [
executable_template + {
'name' : 'systemd-nspawn',
'public' : true,
'conditions' : ['ENABLE_NSPAWN'],
'sources' : files('nspawn.c'),
'link_with' : nspawn_libs,
'dependencies' : [


@@ -251,6 +251,8 @@ static int check_wait_response(BusWaitForJobs *d, WaitJobsFlags flags, const cha
log_error("Unit %s was started already once and can't be started again.", d->name);
else if (streq(d->result, "frozen"))
log_error("Cannot perform operation on frozen unit %s.", d->name);
else if (streq(d->result, "concurrency"))
log_error("Concurrency limit of a slice unit %s is contained in has been reached.", d->name);
else if (endswith(d->name, ".service")) {
/* Job result is unknown. For services, let's also try Result property. */
_cleanup_free_ char *result = NULL;
@@ -281,6 +283,8 @@ static int check_wait_response(BusWaitForJobs *d, WaitJobsFlags flags, const cha
return -ESTALE;
else if (streq(d->result, "frozen"))
return -EDEADLK;
else if (streq(d->result, "concurrency"))
return -ETOOMANYREFS;
return log_debug_errno(SYNTHETIC_ERRNO(ENOMEDIUM),
"Unexpected job result '%s' for unit '%s', assuming server side newer than us.",


@@ -56,6 +56,9 @@ int notify_push_fd(int fd, const char *name) {
if (!state)
return -ENOMEM;
/* Remove existing fds with the same name in fdstore. */
(void) notify_remove_fd_warn(name);
return sd_pid_notify_with_fds(0, /* unset_environment = */ false, state, &fd, 1);
}
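The remove-then-push behavior above boils down to two sd_notify(3) messages: one with FDSTOREREMOVE=1 to drop any fd stored under the same name, then one with FDSTORE=1 to store the new fd. A sketch of just the message formatting (the helper names are made up; FDSTORE, FDSTOREREMOVE and FDNAME are the documented protocol fields):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format the message that asks the manager to drop fds named 'name'. */
static int toy_format_remove(char *buf, size_t n, const char *name) {
        return snprintf(buf, n, "FDSTOREREMOVE=1\nFDNAME=%s", name);
}

/* Format the message that stores a new fd under 'name'. */
static int toy_format_store(char *buf, size_t n, const char *name) {
        return snprintf(buf, n, "FDSTORE=1\nFDNAME=%s", name);
}
```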


@@ -255,6 +255,11 @@ typedef struct UnitStatusInfo {
/* Swap */
const char *what;
/* Slice */
unsigned concurrency_hard_max;
unsigned concurrency_soft_max;
unsigned n_currently_active;
/* CGroup */
uint64_t memory_current;
uint64_t memory_peak;
@@ -711,6 +716,26 @@ static void print_status_info(
putchar('\n');
}
if (endswith(i->id, ".slice")) {
printf(" Act. Units: %u", i->n_currently_active);
if (i->concurrency_soft_max != UINT_MAX || i->concurrency_hard_max != UINT_MAX) {
fputs(" (", stdout);
if (i->concurrency_soft_max != UINT_MAX && i->concurrency_soft_max < i->concurrency_hard_max) {
printf("soft limit: %u", i->concurrency_soft_max);
if (i->concurrency_hard_max != UINT_MAX)
fputs("; ", stdout);
}
if (i->concurrency_hard_max != UINT_MAX)
printf("hard limit: %u", i->concurrency_hard_max);
putchar(')');
}
putchar('\n');
}
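The display logic added above only shows the soft limit when it is set and lower than the hard limit, and shows the hard limit whenever set. A standalone rendering of the same rules into a string (toy_format_act_units is a made-up helper, not the systemctl code):

```c
#include <assert.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>

/* Render the "Act. Units:" line with the same conditions as above;
 * UINT_MAX means the respective limit is unset. */
static void toy_format_act_units(char *buf, size_t n,
                                 unsigned active, unsigned soft, unsigned hard) {
        size_t off = (size_t) snprintf(buf, n, "Act. Units: %u", active);
        if (soft == UINT_MAX && hard == UINT_MAX)
                return; /* no limits configured, nothing to append */
        off += (size_t) snprintf(buf + off, n - off, " (");
        if (soft != UINT_MAX && soft < hard) {
                off += (size_t) snprintf(buf + off, n - off, "soft limit: %u", soft);
                if (hard != UINT_MAX)
                        off += (size_t) snprintf(buf + off, n - off, "; ");
        }
        if (hard != UINT_MAX)
                off += (size_t) snprintf(buf + off, n - off, "hard limit: %u", hard);
        snprintf(buf + off, n - off, ")");
}
```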
if (i->ip_ingress_bytes != UINT64_MAX && i->ip_egress_bytes != UINT64_MAX)
printf(" IP: %s in, %s out\n",
FORMAT_BYTES(i->ip_ingress_bytes),
@@ -2133,6 +2158,9 @@ static int show_one(
{ "SysFSPath", "s", NULL, offsetof(UnitStatusInfo, sysfs_path) },
{ "Where", "s", NULL, offsetof(UnitStatusInfo, where) },
{ "What", "s", NULL, offsetof(UnitStatusInfo, what) },
{ "ConcurrencyHardMax", "u", NULL, offsetof(UnitStatusInfo, concurrency_hard_max) },
{ "ConcurrencySoftMax", "u", NULL, offsetof(UnitStatusInfo, concurrency_soft_max) },
{ "NCurrentlyActive", "u", NULL, offsetof(UnitStatusInfo, n_currently_active) },
{ "MemoryCurrent", "t", NULL, offsetof(UnitStatusInfo, memory_current) },
{ "MemoryPeak", "t", NULL, offsetof(UnitStatusInfo, memory_peak) },
{ "MemorySwapCurrent", "t", NULL, offsetof(UnitStatusInfo, memory_swap_current) },


@@ -764,29 +764,6 @@ TEST(hashmap_free) {
}
}
typedef struct Item {
int seen;
} Item;
static void item_seen(Item *item) {
item->seen++;
}
TEST(hashmap_free_with_destructor) {
Hashmap *m;
struct Item items[4] = {};
unsigned i;
assert_se(m = hashmap_new(NULL));
for (i = 0; i < ELEMENTSOF(items) - 1; i++)
assert_se(hashmap_put(m, INT_TO_PTR(i), items + i) == 1);
m = hashmap_free_with_destructor(m, item_seen);
assert_se(items[0].seen == 1);
assert_se(items[1].seen == 1);
assert_se(items[2].seen == 1);
assert_se(items[3].seen == 0);
}
TEST(hashmap_first) {
_cleanup_hashmap_free_ Hashmap *m = NULL;


@@ -110,7 +110,7 @@ TEST(strv_make_nulstr) {
}
TEST(set_make_nulstr) {
_cleanup_set_free_free_ Set *set = NULL;
_cleanup_set_free_ Set *set = NULL;
size_t len = 0;
int r;
@@ -130,7 +130,7 @@ TEST(set_make_nulstr) {
static const char expect[] = { 0x00, 0x00 };
_cleanup_free_ char *nulstr = NULL;
set = set_new(NULL);
set = set_new(&string_hash_ops_free);
assert_se(set);
r = set_make_nulstr(set, &nulstr, &len);


@@ -227,14 +227,14 @@ TEST(serialize_item_base64mem) {
TEST(serialize_string_set) {
_cleanup_(unlink_tempfilep) char fn[] = "/tmp/test-serialize.XXXXXX";
_cleanup_fclose_ FILE *f = NULL;
_cleanup_set_free_free_ Set *s = NULL;
_cleanup_set_free_ Set *s = NULL;
_cleanup_free_ char *line1 = NULL, *line2 = NULL;
char *p, *q;
assert_se(fmkostemp_safe(fn, "r+", &f) == 0);
log_info("/* %s (%s) */", __func__, fn);
assert_se(set_ensure_allocated(&s, &string_hash_ops) >= 0);
assert_se(set_ensure_allocated(&s, &string_hash_ops_free) >= 0);
assert_se(serialize_string_set(f, "a", s) == 0);


@@ -32,21 +32,6 @@ static void item_seen(Item *item) {
item->seen++;
}
TEST(set_free_with_destructor) {
Set *m;
struct Item items[4] = {};
assert_se(m = set_new(NULL));
FOREACH_ARRAY(item, items, ELEMENTSOF(items) - 1)
assert_se(set_put(m, item) == 1);
m = set_free_with_destructor(m, item_seen);
assert_se(items[0].seen == 1);
assert_se(items[1].seen == 1);
assert_se(items[2].seen == 1);
assert_se(items[3].seen == 0);
}
DEFINE_PRIVATE_HASH_OPS_WITH_VALUE_DESTRUCTOR(item_hash_ops, void, trivial_hash_func, trivial_compare_func, Item, item_seen);
TEST(set_free_with_hash_ops) {
@@ -145,9 +130,8 @@ TEST(set_ensure_allocated) {
}
TEST(set_copy) {
_cleanup_set_free_ Set *s = NULL;
_cleanup_set_free_free_ Set *copy = NULL;
char *key1, *key2, *key3, *key4;
_cleanup_set_free_ Set *s = NULL, *copy = NULL;
_cleanup_free_ char *key1 = NULL, *key2 = NULL, *key3 = NULL, *key4 = NULL;
key1 = strdup("key1");
assert_se(key1);


@@ -553,9 +553,6 @@ int manager_serialize_config(Manager *manager) {
if (r < 0)
return log_warning_errno(r, "Failed to finalize serialization file: %m");
/* Remove the previous serialization to make it replaced with the new one. */
(void) notify_remove_fd_warn("config-serialization");
r = notify_push_fd(fileno(f), "config-serialization");
if (r < 0)
return log_warning_errno(r, "Failed to push serialization fd to service manager: %m");


@@ -1166,7 +1166,7 @@ static int manager_listen_fds(Manager *manager) {
int n = sd_listen_fds_with_names(/* unset_environment = */ true, &names);
if (n < 0)
return n;
return log_error_errno(n, "Failed to listen on fds: %m");
for (int i = 0; i < n; i++) {
int fd = SD_LISTEN_FDS_START + i;

@@ -60,7 +60,7 @@ sanitize_address_undefined = custom_target(
'fuzzers',
' '.join(fuzz_c_args + '-DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION'),
' '.join(fuzz_cpp_args + '-DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION'),
'-Dfuzz-tests=true -Db_lundef=false -Db_sanitize=address,undefined -Dnspawn=enabled --optimization=@0@ @1@ --auto-features=@2@'.format(
'-Dfuzz-tests=true -Db_lundef=false -Db_sanitize=address,undefined --optimization=@0@ @1@ --auto-features=@2@'.format(
get_option('optimization'),
get_option('werror') ? '--werror' : '',
sanitize_auto_features


@@ -0,0 +1,128 @@
#!/usr/bin/env bash
# SPDX-License-Identifier: LGPL-2.1-or-later
# shellcheck disable=SC2016
set -eux
set -o pipefail
# shellcheck source=test/units/test-control.sh
. "$(dirname "$0")"/test-control.sh
# shellcheck source=test/units/util.sh
. "$(dirname "$0")"/util.sh
cat >/etc/systemd/system/concurrency1.slice <<EOF
[Slice]
ConcurrencyHardMax=4
ConcurrencySoftMax=3
EOF
cat >/etc/systemd/system/sleepforever1@.service <<EOF
[Service]
Slice=concurrency1.slice
ExecStart=sleep infinity
EOF
cat >/etc/systemd/system/sync-on-sleepforever1@.service <<EOF
[Unit]
After=sleepforever1@%i.service
[Service]
ExecStart=true
EOF
cat >/etc/systemd/system/concurrency1-concurrency2.slice <<EOF
[Slice]
ConcurrencySoftMax=1
EOF
cat >/etc/systemd/system/sleepforever2@.service <<EOF
[Service]
Slice=concurrency1-concurrency2.slice
ExecStart=sleep infinity
EOF
cat >/etc/systemd/system/concurrency1-concurrency3.slice <<EOF
[Slice]
ConcurrencySoftMax=1
EOF
cat >/etc/systemd/system/sleepforever3@.service <<EOF
[Service]
Slice=concurrency1-concurrency3.slice
ExecStart=sleep infinity
EOF
systemctl daemon-reload
systemctl status concurrency1.slice ||:
(! systemctl is-active concurrency1.slice)
systemctl start sleepforever1@a.service
systemctl is-active concurrency1.slice
systemctl status concurrency1.slice
systemctl show concurrency1.slice
systemctl start sleepforever1@b.service
systemctl status concurrency1.slice
systemctl start sleepforever1@c.service
systemctl status concurrency1.slice
# The fourth call should hang because the soft limit is hit, verify that
timeout 1s systemctl start sleepforever1@d.service && test "$?" -eq 124
systemctl status concurrency1.slice
systemctl list-jobs
systemctl is-active sleepforever1@a.service
systemctl is-active sleepforever1@b.service
systemctl is-active sleepforever1@c.service
(! systemctl is-active sleepforever1@d.service)
systemctl status concurrency1.slice
# Now stop one, which should trigger the queued unit immediately
systemctl stop sleepforever1@b.service
# the 'd' instance should still be queued, now sync on it via another unit (which doesn't pull it in again, but is ordered after it)
systemctl start sync-on-sleepforever1@d.service
systemctl is-active sleepforever1@a.service
(! systemctl is-active sleepforever1@b.service)
systemctl is-active sleepforever1@c.service
systemctl is-active sleepforever1@d.service
# A fifth one should immediately fail because of the hard limit once we re-enqueue the fourth
systemctl --no-block start sleepforever1@b.service
(! systemctl start sleepforever1@e.service)
systemctl stop sleepforever1@b.service
systemctl stop sleepforever1@c.service
systemctl stop sleepforever1@d.service
# Now go for some nesting
systemctl start sleepforever2@a.service
systemctl is-active sleepforever2@a.service
systemctl is-active concurrency1-concurrency2.slice
systemctl status concurrency1.slice
systemctl status concurrency1-concurrency2.slice
# This service is in a sibling slice. Should be delayed
timeout 1s systemctl start sleepforever3@a.service && test "$?" -eq 124
# And the hard limit should make the next job completely fail
(! systemctl start sleepforever3@b.service)
# Stopping one service should not suffice to make the service run, because we need two slots: for slice and service
systemctl stop sleepforever2@a.service
timeout 1s systemctl start sleepforever3@a.service && test "$?" -eq 124
# Stopping one more slice should be enough though
systemctl stop concurrency1-concurrency2.slice
systemctl start sleepforever3@a.service
systemctl stop concurrency1.slice
systemctl reset-failed
rm /etc/systemd/system/concurrency1.slice
rm /etc/systemd/system/concurrency1-concurrency2.slice
rm /etc/systemd/system/concurrency1-concurrency3.slice
rm /etc/systemd/system/sleepforever1@.service
rm /etc/systemd/system/sync-on-sleepforever1@.service
rm /etc/systemd/system/sleepforever2@.service
rm /etc/systemd/system/sleepforever3@.service
systemctl daemon-reload