mirror of https://github.com/systemd/systemd synced 2025-09-29 00:34:45 +02:00

Compare commits


26 Commits

Author SHA1 Message Date
Yu Watanabe
33f7b61ca5
Merge pull request #18329 from poettering/notify-chroot
chroot/sd_notify() fixes
2021-01-21 13:16:59 +09:00
Lennart Poettering
9807fdc1da varlink: make 'userdata' pointer inheritance from varlink server to connection optional
@keszybz's right on
https://github.com/systemd/systemd/pull/18248#issuecomment-760798473:
swapping out the userdata pointer of a live varlink connection is iffy.

Let's fix this by making the userdata inheritance from VarlinkServer
object to the Varlink connection object optional: we want it in most
cases, i.e. wherever the calls implemented as varlink methods are
stateless and can be answered synchronously, but not in all. For the
other cases (i.e. where we want per-connection objects that wrap the
asynchronous operation as it goes on) let's not do such inheritance but
initialize the userdata pointer only once we have it. This means the
original manager object must be manually retrieved from the
VarlinkServer object, which in turn needs to be requested from the
Varlink connection object.

The userdata inheritance is now controlled by the
VARLINK_INHERIT_USERDATA flag passed at VarlinkServer construction.

Alternative-to: #18248
2021-01-21 07:31:58 +09:00
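A schematic sketch of the two patterns for reaching the Manager object in a varlink method handler that result from the commit above. This is not a drop-in file: it relies on the systemd-internal varlink.h API shown in the hunks further down, and "Manager" and the handler names are illustrative stand-ins for whatever the respective daemon uses.

#include <assert.h>
#include "varlink.h"                    /* systemd-internal header */

typedef struct Manager Manager;         /* stand-in for the daemon's manager type */

/* Case 1: the server was created with VARLINK_SERVER_INHERIT_USERDATA, so the
 * connection's userdata was copied from the server and already points at the
 * Manager. Suitable for stateless methods answered synchronously. */
static int vl_method_simple(Varlink *link, JsonVariant *parameters,
                            VarlinkMethodFlags flags, void *userdata) {
        Manager *m = userdata;

        assert(link);
        assert(m);

        /* ... answer synchronously using m ... */
        return 0;
}

/* Case 2: no inheritance, because userdata is reserved for a per-connection
 * object wrapping an asynchronous operation. The Manager is then fetched
 * through the server the connection belongs to, as resolved does below. */
static int vl_method_async(Varlink *link, JsonVariant *parameters,
                           VarlinkMethodFlags flags, void *userdata) {
        Manager *m;

        assert(link);

        m = varlink_server_get_userdata(varlink_get_server(link));
        assert(m);

        /* ... kick off the asynchronous operation, storing its state as the
         * connection's userdata once it exists ... */
        return 0;
}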
Yu Watanabe
4723205968
Merge pull request #18311 from poettering/sysext-fixups
sysext: post-merge fixups
2021-01-21 07:20:04 +09:00
Lennart Poettering
fe239c7d7d portabled: update profiles to current semantics
MountAPIVFS= implicitly mounts /run as tmpfs now, so there is no need to do this
explicitly.

The notification socket is now implicitly mounted too, if NotifyAccess=
and RootImage=/RootDirectory= are used together.
2021-01-20 22:39:53 +01:00
Lennart Poettering
09872a6e1a man: document how to get logging to work in a RootDirectory=/RootImage= environment
Fixes: #18051
2021-01-20 22:39:33 +01:00
Lennart Poettering
3bdc25a4cf core: make NotifyAccess= in combination with RootDirectory=/RootImage= work
Previously, if people enabled RootDirectory=/RootImage= and NotifyAccess=
together, things wouldn't work; they'd have to explicitly add
BindReadOnlyPaths=/run/systemd/notify too.

Let's make this implicit. Since both options are opt-in, using them
together without also defining the BindReadOnlyPaths= entry would be
pointless, so we can just add it automatically.

See: #18051
2021-01-20 22:39:07 +01:00
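As a minimal illustration of what the commit above enables, the following sketch is a Type=notify service body that can now run with RootImage= (or RootDirectory=) plus NotifyAccess= and signal readiness without any manual BindReadOnlyPaths=/run/systemd/notify. It uses only the public sd_notify() API from libsystemd; the file name and build command are illustrative.

/* notify-demo.c -- build with: cc notify-demo.c -lsystemd */
#include <stdio.h>
#include <unistd.h>
#include <systemd/sd-daemon.h>

int main(void) {
        /* ... do whatever initialization the service needs ... */

        /* $NOTIFY_SOCKET now points at the notification socket that PID 1
         * bind-mounted into the root environment automatically. */
        if (sd_notify(0, "READY=1") < 0)
                fprintf(stderr, "Failed to send readiness notification.\n");

        pause();        /* keep running as the main service process */
        return 0;
}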
Luca Boccassi
7504f599e1
Merge pull request #18325 from ssahani/more-cleanup
Various tree-wide cleanups
2021-01-20 20:48:18 +00:00
Lennart Poettering
301265ea10 man: document recent systemd-sysext interface changes 2021-01-20 17:50:23 +01:00
Lennart Poettering
8de42cb461 sysext: add --force switch for forcibly ignoring version incompatibilities 2021-01-20 17:50:23 +01:00
Lennart Poettering
8662fcbcf1 sysext: rework command line interface to be verb-based
As suggested by @yuwata:

https://github.com/systemd/systemd/pull/18181#pullrequestreview-570826113
2021-01-20 17:50:23 +01:00
Lennart Poettering
9901835d80 sysext: split version validation logic into function of its own
Just some simple refactoring to simplify the logic.
2021-01-20 17:44:53 +01:00
Lennart Poettering
1f3707aeea sysext: use log_setup_cli() 2021-01-20 17:44:53 +01:00
Susant Sahani
cecaba2003 btrfs-util: tighten variable scope used in loop 2021-01-20 15:14:30 +01:00
Susant Sahani
a67f102e79 analyze: tighten variable scope used in loop 2021-01-20 15:13:24 +01:00
Susant Sahani
c2484a7514 sd-event: Use hashmap_ensure_put 2021-01-20 15:13:21 +01:00
Susant Sahani
f656fdb623 sd-event: Use hashmap_ensure_put 2021-01-20 15:13:18 +01:00
Susant Sahani
639deab187 sd-device: Use TAKE_PTR 2021-01-20 15:13:13 +01:00
Susant Sahani
e8480482ca sd-device: Use hashmap_ensure_put 2021-01-20 15:13:08 +01:00
Susant Sahani
875038d5fe udev-rules: use ordered_hashmap_ensure_put 2021-01-20 15:13:02 +01:00
Susant Sahani
0c7bd7ecbd network: networkd-network use TAKE_PTR 2021-01-20 15:09:26 +01:00
Susant Sahani
6de530f2b8 network: Use hashmap_ensure_put 2021-01-20 15:09:20 +01:00
Susant Sahani
9b1fd1f55b network: ndisc - Use ordered_set_ensure_put 2021-01-20 15:09:14 +01:00
Susant Sahani
32ae5db60a machine: Use hashmap_ensure_put 2021-01-20 15:09:09 +01:00
Susant Sahani
9a8d1b455b logind: Use hashmap_ensure_put 2021-01-20 15:09:03 +01:00
Susant Sahani
8231485bc5 journal: Use cleanup_free 2021-01-20 15:08:59 +01:00
Susant Sahani
faa7e5a43b Journal: Use hashmap_ensure_put 2021-01-20 15:08:30 +01:00
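Most of the cleanup commits above replace the two-step hashmap_ensure_allocated() plus hashmap_put() sequence with the single hashmap_ensure_put() helper. A schematic before/after sketch, assuming the systemd-internal hashmap API as it appears in the hunks below; "cache", "key" and "value" are illustrative names:

#include "hashmap.h"        /* systemd-internal header: string_hash_ops, hashmap_ensure_put() */

/* Before: allocate the hashmap on first use, then insert. */
static int put_old(Hashmap **cache, char *key, void *value) {
        int r;

        r = hashmap_ensure_allocated(cache, &string_hash_ops);
        if (r < 0)
                return r;

        return hashmap_put(*cache, key, value);
}

/* After: one call allocates on demand and inserts. It forwards hashmap_put()'s
 * return codes, e.g. -ENOMEM on allocation failure and -EEXIST if the key is
 * already present (the sd-journal hunk below treats -EEXIST as success). */
static int put_new(Hashmap **cache, char *key, void *value) {
        return hashmap_ensure_put(cache, &string_hash_ops, key, value);
}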
32 changed files with 371 additions and 328 deletions

View File

@ -139,7 +139,7 @@
with newer ones, for example to install a locally compiled development version of some low-level
component over the immutable OS image without doing a full OS rebuild or modifying the nominally
immutable image. (e.g. "install" a locally built package with <command>DESTDIR=/var/lib/extensions/mytest
make install &amp;&amp; systemd-sysext --refresh</command>, making it available in
make install &amp;&amp; systemd-sysext refresh</command>, making it available in
<filename>/usr/</filename> as if it was installed in the OS image itself.) This case works regardless if
the underlying host <filename>/usr/</filename> is managed as immutable disk image or is a traditional
package manager controlled (i.e. writable) tree.</para>
@ -148,12 +148,19 @@
<refsect1>
<title>Commands</title>
<para>The following command switches are understood:</para>
<para>The following commands are understood:</para>
<variablelist>
<varlistentry>
<term><option>--merge</option></term>
<term><option>-m</option></term>
<term><option>status</option></term>
<listitem><para>When invoked without any command verb, or when <option>status</option> is specified
the current merge status is shown, separately for both <filename>/usr/</filename> and
<filename>/opt/</filename>.</para></listitem>
</varlistentry>
<varlistentry>
<term><option>merge</option></term>
<listitem><para>Merges all currently installed system extension images into
<filename>/usr/</filename> and <filename>/opt/</filename>, by overmounting these hierarchies with an
<literal>overlayfs</literal> file system combining the underlying hierarchies with those included in
@ -161,22 +168,20 @@
</varlistentry>
<varlistentry>
<term><option>--unmerge</option></term>
<term><option>-u</option></term>
<term><option>unmerge</option></term>
<listitem><para>Unmerges all currently installed system extension images from
<filename>/usr/</filename> and <filename>/opt/</filename>, by unmounting the
<literal>overlayfs</literal> file systems created by <option>--merge</option>
<literal>overlayfs</literal> file systems created by <option>merge</option>
prior.</para></listitem>
</varlistentry>
<varlistentry>
<term><option>--refresh</option></term>
<term><option>-R</option></term>
<listitem><para>A combination of <option>--unmerge</option> and <option>--merge</option>: if already
<term><option>refresh</option></term>
<listitem><para>A combination of <option>unmerge</option> and <option>merge</option>: if already
mounted the existing <literal>overlayfs</literal> instance is unmounted temporarily, and then
replaced by a new version. This command is useful after installing/removing system extension images,
in order to update the <literal>overlayfs</literal> file system accordingly. If no system extensions
are installed when this command is executed, the equivalent of <option>--unmerge</option> is
are installed when this command is executed, the equivalent of <option>unmerge</option> is
executed, without establishing any new <literal>overlayfs</literal> instance. Note that currently
there's a brief moment where neither the old nor the new <literal>overlayfs</literal> file system is
mounted. This implies that all resources supplied by a system extension will briefly disappear — even
@ -184,8 +189,7 @@
</varlistentry>
<varlistentry>
<term><option>--list</option></term>
<term><option>-l</option></term>
<term><option>list</option></term>
<listitem><para>A brief list of installed extension images is shown.</para></listitem>
</varlistentry>
@ -193,9 +197,6 @@
<xi:include href="standard-options.xml" xpointer="help" />
<xi:include href="standard-options.xml" xpointer="version" />
</variablelist>
<para>When invoked without any command switches, the current merge status is shown, separately for both
<filename>/usr/</filename> and <filename>/opt/</filename>.</para>
</refsect1>
<refsect1>
@ -218,6 +219,15 @@
output style, or explicitly disabling JSON output.</para></listitem>
</varlistentry>
<varlistentry>
<term><option>--force</option></term>
<listitem><para>When merging system extensions into <filename>/usr/</filename> and
<filename>/opt/</filename>, ignore version incompatibilities, i.e. force merging regardless of
whether the version information included in the extension images matches the host or
not.</para></listitem>
</varlistentry>
<xi:include href="standard-options.xml" xpointer="no-pager" />
</variablelist>
</refsect1>

View File

@ -117,6 +117,20 @@
<para>The <varname>MountAPIVFS=</varname> and <varname>PrivateUsers=</varname> settings are particularly useful
in conjunction with <varname>RootDirectory=</varname>. For details, see below.</para>
<para>If <varname>RootDirectory=</varname>/<varname>RootImage=</varname> are used together with
<varname>NotifyAccess=</varname> the notification socket is automatically mounted from the host into
the root environment, to ensure the notification interface can work correctly.</para>
<para>Note that services using <varname>RootDirectory=</varname>/<varname>RootImage=</varname> will
not be able to log via the syslog or journal protocols to the host logging infrastructure, unless the
relevant sockets are mounted from the host, specifically:</para>
<example>
<title>Mounting logging sockets into root environment</title>
<programlisting>BindReadOnlyPaths=/dev/log /run/systemd/journal/socket /run/systemd/journal/stdout</programlisting>
</example>
<xi:include href="system-only.xml" xpointer="singular"/></listitem>
</varlistentry>

View File

@ -1022,7 +1022,6 @@ static int list_dependencies(sd_bus *bus, const char *name) {
static int analyze_critical_chain(int argc, char *argv[], void *userdata) {
_cleanup_(sd_bus_flush_close_unrefp) sd_bus *bus = NULL;
_cleanup_(unit_times_freep) struct unit_times *times = NULL;
struct unit_times *u;
Hashmap *h;
int n, r;
@ -1038,7 +1037,7 @@ static int analyze_critical_chain(int argc, char *argv[], void *userdata) {
if (!h)
return log_oom();
for (u = times; u->has_data; u++) {
for (struct unit_times *u = times; u->has_data; u++) {
r = hashmap_put(h, u->name, u);
if (r < 0)
return log_error_errno(r, "Failed to add entry to hashmap: %m");
@ -1065,7 +1064,6 @@ static int analyze_blame(int argc, char *argv[], void *userdata) {
_cleanup_(sd_bus_flush_close_unrefp) sd_bus *bus = NULL;
_cleanup_(unit_times_freep) struct unit_times *times = NULL;
_cleanup_(table_unrefp) Table *table = NULL;
struct unit_times *u;
TableCell *cell;
int n, r;
@ -1105,7 +1103,7 @@ static int analyze_blame(int argc, char *argv[], void *userdata) {
if (r < 0)
return r;
for (u = times; u->has_data; u++) {
for (struct unit_times *u = times; u->has_data; u++) {
if (u->time <= 0)
continue;
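The "tighten variable scope used in loop" commits all apply the same mechanical change visible in this hunk and the btrfs-util one below: the loop index moves from a function-scope declaration into the for statement itself. A trivial before/after sketch with illustrative function names:

/* Before: the index is declared at function scope and stays visible (and
 * accidentally reusable) long after the loop has finished. */
static int sum_old(const int *values, int n) {
        int i, sum = 0;

        for (i = 0; i < n; i++)
                sum += values[i];

        return sum;
}

/* After: the index is declared in the for statement (C99), so its scope is
 * exactly the loop body. */
static int sum_new(const int *values, int n) {
        int sum = 0;

        for (int i = 0; i < n; i++)
                sum += values[i];

        return sum;
}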

View File

@ -673,7 +673,7 @@ int btrfs_qgroup_get_quota(const char *path, uint64_t qgroupid, BtrfsQuotaInfo *
int btrfs_subvol_find_subtree_qgroup(int fd, uint64_t subvol_id, uint64_t *ret) {
uint64_t level, lowest = (uint64_t) -1, lowest_qgroupid = 0;
_cleanup_free_ uint64_t *qgroups = NULL;
int r, n, i;
int r, n;
assert(fd >= 0);
assert(ret);
@ -703,7 +703,7 @@ int btrfs_subvol_find_subtree_qgroup(int fd, uint64_t subvol_id, uint64_t *ret)
if (n < 0)
return n;
for (i = 0; i < n; i++) {
for (int i = 0; i < n; i++) {
uint64_t id;
r = btrfs_qgroupid_split(qgroups[i], &level, &id);
@ -824,7 +824,6 @@ int btrfs_qgroup_set_limit_fd(int fd, uint64_t qgroupid, uint64_t referenced_max
.lim.max_rfer = referenced_max,
.lim.flags = BTRFS_QGROUP_LIMIT_MAX_RFER,
};
unsigned c;
int r;
assert(fd >= 0);
@ -843,7 +842,7 @@ int btrfs_qgroup_set_limit_fd(int fd, uint64_t qgroupid, uint64_t referenced_max
args.qgroupid = qgroupid;
for (c = 0;; c++) {
for (unsigned c = 0;; c++) {
if (ioctl(fd, BTRFS_IOC_QGROUP_LIMIT, &args) < 0) {
if (errno == EBUSY && c < 10) {
@ -924,7 +923,6 @@ static int qgroup_create_or_destroy(int fd, bool b, uint64_t qgroupid) {
.create = b,
.qgroupid = qgroupid,
};
unsigned c;
int r;
r = btrfs_is_filesystem(fd);
@ -933,7 +931,7 @@ static int qgroup_create_or_destroy(int fd, bool b, uint64_t qgroupid) {
if (r == 0)
return -ENOTTY;
for (c = 0;; c++) {
for (unsigned c = 0;; c++) {
if (ioctl(fd, BTRFS_IOC_QGROUP_CREATE, &args) < 0) {
/* On old kernels if quota is not enabled, we get EINVAL. On newer kernels we get
@ -968,7 +966,7 @@ int btrfs_qgroup_destroy(int fd, uint64_t qgroupid) {
int btrfs_qgroup_destroy_recursive(int fd, uint64_t qgroupid) {
_cleanup_free_ uint64_t *qgroups = NULL;
uint64_t subvol_id;
int i, n, r;
int n, r;
/* Destroys the specified qgroup, but unassigns it from all
* its parents first. Also, it recursively destroys all
@ -983,7 +981,7 @@ int btrfs_qgroup_destroy_recursive(int fd, uint64_t qgroupid) {
if (n < 0)
return n;
for (i = 0; i < n; i++) {
for (int i = 0; i < n; i++) {
uint64_t id;
r = btrfs_qgroupid_split(qgroups[i], NULL, &id);
@ -1043,7 +1041,6 @@ static int qgroup_assign_or_unassign(int fd, bool b, uint64_t child, uint64_t pa
.src = child,
.dst = parent,
};
unsigned c;
int r;
r = btrfs_is_filesystem(fd);
@ -1052,7 +1049,7 @@ static int qgroup_assign_or_unassign(int fd, bool b, uint64_t child, uint64_t pa
if (r == 0)
return -ENOTTY;
for (c = 0;; c++) {
for (unsigned c = 0;; c++) {
r = ioctl(fd, BTRFS_IOC_QGROUP_ASSIGN, &args);
if (r < 0) {
if (errno == EBUSY && c < 10) {
@ -1351,7 +1348,7 @@ int btrfs_qgroup_copy_limits(int fd, uint64_t old_qgroupid, uint64_t new_qgroupi
static int copy_quota_hierarchy(int fd, uint64_t old_subvol_id, uint64_t new_subvol_id) {
_cleanup_free_ uint64_t *old_qgroups = NULL, *old_parent_qgroups = NULL;
bool copy_from_parent = false, insert_intermediary_qgroup = false;
int n_old_qgroups, n_old_parent_qgroups, r, i;
int n_old_qgroups, n_old_parent_qgroups, r;
uint64_t old_parent_id;
assert(fd >= 0);
@ -1375,9 +1372,8 @@ static int copy_quota_hierarchy(int fd, uint64_t old_subvol_id, uint64_t new_sub
return n_old_parent_qgroups;
}
for (i = 0; i < n_old_qgroups; i++) {
for (int i = 0; i < n_old_qgroups; i++) {
uint64_t id;
int j;
r = btrfs_qgroupid_split(old_qgroups[i], NULL, &id);
if (r < 0)
@ -1392,7 +1388,7 @@ static int copy_quota_hierarchy(int fd, uint64_t old_subvol_id, uint64_t new_sub
break;
}
for (j = 0; j < n_old_parent_qgroups; j++)
for (int j = 0; j < n_old_parent_qgroups; j++)
if (old_parent_qgroups[j] == old_qgroups[i])
/* The old subvolume shared a common
* parent qgroup with its parent
@ -1880,12 +1876,11 @@ int btrfs_subvol_auto_qgroup_fd(int fd, uint64_t subvol_id, bool insert_intermed
if (insert_intermediary_qgroup) {
uint64_t lowest = 256, new_qgroupid;
bool created = false;
int i;
/* Determine the lowest qgroup that the parent
* subvolume is assigned to. */
for (i = 0; i < n; i++) {
for (int i = 0; i < n; i++) {
uint64_t level;
r = btrfs_qgroupid_split(qgroups[i], &level, NULL);
@ -1910,7 +1905,7 @@ int btrfs_subvol_auto_qgroup_fd(int fd, uint64_t subvol_id, bool insert_intermed
if (r >= 0)
changed = created = true;
for (i = 0; i < n; i++) {
for (int i = 0; i < n; i++) {
r = btrfs_qgroup_assign(fd, new_qgroupid, qgroups[i]);
if (r < 0 && r != -EEXIST) {
if (created)

View File

@ -432,7 +432,7 @@ int manager_varlink_init(Manager *m) {
if (!MANAGER_IS_SYSTEM(m))
return 0;
r = varlink_server_new(&s, VARLINK_SERVER_ACCOUNT_UID);
r = varlink_server_new(&s, VARLINK_SERVER_ACCOUNT_UID|VARLINK_SERVER_INHERIT_USERDATA);
if (r < 0)
return log_error_errno(r, "Failed to allocate varlink server object: %m");

View File

@ -3223,6 +3223,7 @@ static int apply_mount_namespace(
context->root_verity,
propagate_dir,
incoming_dir,
root_dir || root_image ? params->notify_socket : NULL,
DISSECT_IMAGE_DISCARD_ON_LOOP|DISSECT_IMAGE_RELAX_VAR_CHECK|DISSECT_IMAGE_FSCK,
error_path);

View File

@ -384,6 +384,8 @@ struct ExecParameters {
/* An fd that is closed by the execve(), and thus will result in EOF when the execve() is done */
int exec_fd;
const char *notify_socket;
};
#include "unit.h"

View File

@ -1302,7 +1302,8 @@ static size_t namespace_calculate_mounts(
const char* var_tmp_dir,
const char *creds_path,
const char* log_namespace,
bool setup_propagate) {
bool setup_propagate,
const char* notify_socket) {
size_t protect_home_cnt;
size_t protect_system_cnt =
@ -1329,7 +1330,6 @@ static size_t namespace_calculate_mounts(
n_bind_mounts +
n_mount_images +
n_temporary_filesystems +
(setup_propagate ? 1 : 0) + /* /run/systemd/incoming */
ns_info->private_dev +
(ns_info->protect_kernel_tunables ? ELEMENTSOF(protect_kernel_tunables_table) : 0) +
(ns_info->protect_kernel_modules ? ELEMENTSOF(protect_kernel_modules_table) : 0) +
@ -1339,7 +1339,9 @@ static size_t namespace_calculate_mounts(
(ns_info->protect_hostname ? 2 : 0) +
(namespace_info_mount_apivfs(ns_info) ? ELEMENTSOF(apivfs_table) : 0) +
(creds_path ? 2 : 1) +
!!log_namespace;
!!log_namespace +
setup_propagate + /* /run/systemd/incoming */
!!notify_socket;
}
static void normalize_mounts(const char *root_directory, MountEntry *mounts, size_t *n_mounts) {
@ -1491,6 +1493,7 @@ int setup_namespace(
const char *verity_data_path,
const char *propagate_dir,
const char *incoming_dir,
const char *notify_socket,
DissectImageFlags dissect_image_flags,
char **error_path) {
@ -1593,7 +1596,8 @@ int setup_namespace(
tmp_dir, var_tmp_dir,
creds_path,
log_namespace,
setup_propagate);
setup_propagate,
notify_socket);
if (n_mounts > 0) {
m = mounts = new0(MountEntry, n_mounts);
@ -1771,6 +1775,14 @@ int setup_namespace(
.read_only = true,
};
if (notify_socket)
*(m++) = (MountEntry) {
.path_const = notify_socket,
.source_const = notify_socket,
.mode = BIND_MOUNT,
.read_only = true,
};
assert(mounts + n_mounts == m);
/* Prepend the root directory where that's necessary */
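A small aside on the counting idiom used in the namespace_calculate_mounts() hunk above: a C bool promotes to 0 or 1 in arithmetic, and "!!" collapses a pointer to 0 or 1, so each optional mount contributes exactly one slot to the total that setup_namespace() later verifies with assert(mounts + n_mounts == m). A toy sketch; the function name and parameter list are illustrative, not from the diff:

#include <stdbool.h>
#include <stddef.h>

/* Each term evaluates to exactly 0 or 1, i.e. one MountEntry slot per
 * optional mount that is actually requested. */
static size_t count_optional_mounts(bool setup_propagate,
                                    const char *notify_socket,
                                    const char *log_namespace) {
        return setup_propagate      /* bool promotes to 0 or 1: /run/systemd/incoming */
             + !!notify_socket      /* non-NULL pointer -> 1, NULL -> 0 */
             + !!log_namespace;
}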

View File

@ -129,6 +129,7 @@ int setup_namespace(
const char *root_verity,
const char *propagate_dir,
const char *incoming_dir,
const char *notify_socket,
DissectImageFlags dissected_image_flags,
char **error_path);

View File

@ -1474,10 +1474,13 @@ static int service_spawn(
if (!our_env)
return -ENOMEM;
if (service_exec_needs_notify_socket(s, flags))
if (service_exec_needs_notify_socket(s, flags)) {
if (asprintf(our_env + n_env++, "NOTIFY_SOCKET=%s", UNIT(s)->manager->notify_socket) < 0)
return -ENOMEM;
exec_params.notify_socket = UNIT(s)->manager->notify_socket;
}
if (s->main_pid > 0)
if (asprintf(our_env + n_env++, "MAINPID="PID_FMT, s->main_pid) < 0)
return -ENOMEM;

View File

@ -956,7 +956,7 @@ static int manager_bind_varlink(Manager *m) {
assert(m);
assert(!m->varlink_server);
r = varlink_server_new(&m->varlink_server, VARLINK_SERVER_ACCOUNT_UID);
r = varlink_server_new(&m->varlink_server, VARLINK_SERVER_ACCOUNT_UID|VARLINK_SERVER_INHERIT_USERDATA);
if (r < 0)
return log_error_errno(r, "Failed to allocate varlink server object: %m");

View File

@ -2033,7 +2033,7 @@ static int server_open_varlink(Server *s, const char *socket, int fd) {
assert(s);
r = varlink_server_new(&s->varlink_server, VARLINK_SERVER_ROOT_ONLY);
r = varlink_server_new(&s->varlink_server, VARLINK_SERVER_ROOT_ONLY|VARLINK_SERVER_INHERIT_USERDATA);
if (r < 0)
return r;

View File

@ -732,15 +732,13 @@ _public_ int sd_device_monitor_filter_add_match_subsystem_devtype(sd_device_moni
return -ENOMEM;
}
r = hashmap_ensure_allocated(&m->subsystem_filter, NULL);
r = hashmap_ensure_put(&m->subsystem_filter, NULL, s, d);
if (r < 0)
return r;
r = hashmap_put(m->subsystem_filter, s, d);
if (r < 0)
return r;
TAKE_PTR(s);
TAKE_PTR(d);
s = d = NULL;
m->filter_uptodate = false;
return 0;
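The hunk above combines hashmap_ensure_put() with TAKE_PTR() so that ownership of the freshly strdup()'d strings passes to the hashmap only after the insert succeeds; on every error path the _cleanup_free_ attribute still frees them. A sketch of that pattern, assuming the systemd-internal alloc-util.h/hashmap.h helpers; the function and variable names are illustrative:

#include <errno.h>
#include <string.h>
#include "alloc-util.h"     /* _cleanup_free_, TAKE_PTR() (systemd-internal) */
#include "hashmap.h"

static int remember_filter(Hashmap **filter, const char *subsystem, const char *devtype) {
        _cleanup_free_ char *s = NULL, *d = NULL;
        int r;

        s = strdup(subsystem);
        if (!s)
                return -ENOMEM;

        if (devtype) {
                d = strdup(devtype);
                if (!d)
                        return -ENOMEM;
        }

        r = hashmap_ensure_put(filter, NULL, s, d);
        if (r < 0)
                return r;     /* s and d are freed automatically on this path */

        TAKE_PTR(s);          /* now owned by the hashmap */
        TAKE_PTR(d);
        return 0;
}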

View File

@ -629,10 +629,6 @@ static int event_make_signal_data(
return 0;
}
} else {
r = hashmap_ensure_allocated(&e->signal_data, &uint64_hash_ops);
if (r < 0)
return r;
d = new(struct signal_data, 1);
if (!d)
return -ENOMEM;
@ -643,7 +639,7 @@ static int event_make_signal_data(
.priority = priority,
};
r = hashmap_put(e->signal_data, &d->priority, d);
r = hashmap_ensure_put(&e->signal_data, &uint64_hash_ops, &d->priority, d);
if (r < 0) {
free(d);
return r;
@ -1727,10 +1723,6 @@ static int event_make_inotify_data(
fd = fd_move_above_stdio(fd);
r = hashmap_ensure_allocated(&e->inotify_data, &uint64_hash_ops);
if (r < 0)
return r;
d = new(struct inotify_data, 1);
if (!d)
return -ENOMEM;
@ -1741,7 +1733,7 @@ static int event_make_inotify_data(
.priority = priority,
};
r = hashmap_put(e->inotify_data, &d->priority, d);
r = hashmap_ensure_put(&e->inotify_data, &uint64_hash_ops, &d->priority, d);
if (r < 0) {
d->fd = safe_close(d->fd);
free(d);

View File

@ -63,7 +63,7 @@ static bool journal_pid_changed(sd_journal *j) {
}
static int journal_put_error(sd_journal *j, int r, const char *path) {
char *copy;
_cleanup_free_ char *copy = NULL;
int k;
/* Memorize an error we encountered, and store which
@ -80,27 +80,21 @@ static int journal_put_error(sd_journal *j, int r, const char *path) {
if (r >= 0)
return r;
k = hashmap_ensure_allocated(&j->errors, NULL);
if (k < 0)
return k;
if (path) {
copy = strdup(path);
if (!copy)
return -ENOMEM;
} else
copy = NULL;
}
k = hashmap_put(j->errors, INT_TO_PTR(r), copy);
k = hashmap_ensure_put(&j->errors, NULL, INT_TO_PTR(r), copy);
if (k < 0) {
free(copy);
if (k == -EEXIST)
return 0;
return k;
}
TAKE_PTR(copy);
return 0;
}

View File

@ -217,10 +217,6 @@ int manager_write_brightness(
return 0;
}
r = hashmap_ensure_allocated(&m->brightness_writers, &brightness_writer_hash_ops);
if (r < 0)
return log_oom();
w = new(BrightnessWriter, 1);
if (!w)
return log_oom();
@ -234,9 +230,12 @@ int manager_write_brightness(
if (!w->path)
return log_oom();
r = hashmap_put(m->brightness_writers, w->path, w);
r = hashmap_ensure_put(&m->brightness_writers, &brightness_writer_hash_ops, w->path, w);
if (r == -ENOMEM)
return log_oom();
if (r < 0)
return log_error_errno(r, "Failed to add brightness writer to hashmap: %m");
w->manager = m;
r = set_add_message(&w->current_messages, message);

View File

@ -390,10 +390,6 @@ static int image_object_find(sd_bus *bus, const char *path, const char *interfac
return 1;
}
r = hashmap_ensure_allocated(&m->image_cache, &image_hash_ops);
if (r < 0)
return r;
if (!m->image_cache_defer_event) {
r = sd_event_add_defer(m->event, &m->image_cache_defer_event, image_flush_cache, m);
if (r < 0)
@ -416,7 +412,7 @@ static int image_object_find(sd_bus *bus, const char *path, const char *interfac
image->userdata = m;
r = hashmap_put(m->image_cache, image->name, image);
r = hashmap_ensure_put(&m->image_cache, &image_hash_ops, image->name, image);
if (r < 0) {
image_unref(image);
return r;

View File

@ -388,7 +388,7 @@ int manager_varlink_init(Manager *m) {
if (m->varlink_server)
return 0;
r = varlink_server_new(&s, VARLINK_SERVER_ACCOUNT_UID);
r = varlink_server_new(&s, VARLINK_SERVER_ACCOUNT_UID|VARLINK_SERVER_INHERIT_USERDATA);
if (r < 0)
return log_error_errno(r, "Failed to allocate varlink server object: %m");

View File

@ -1544,11 +1544,9 @@ int config_parse_address_generation_type(
token->prefix = buffer.in6;
}
r = ordered_set_ensure_allocated(&network->ipv6_tokens, &ipv6_token_hash_ops);
if (r < 0)
r = ordered_set_ensure_put(&network->ipv6_tokens, &ipv6_token_hash_ops, token);
if (r == -ENOMEM)
return log_oom();
r = ordered_set_put(network->ipv6_tokens, token);
if (r == -EEXIST)
log_syntax(unit, LOG_DEBUG, filename, line, r,
"IPv6 token '%s' is duplicated, ignoring: %m", rvalue);

View File

@ -805,11 +805,9 @@ int config_parse_stacked_netdev(const char *unit,
if (!name)
return log_oom();
r = hashmap_ensure_allocated(h, &string_hash_ops);
if (r < 0)
r = hashmap_ensure_put(h, &string_hash_ops, name, INT_TO_PTR(kind));
if (r == -ENOMEM)
return log_oom();
r = hashmap_put(*h, name, INT_TO_PTR(kind));
if (r < 0)
log_syntax(unit, LOG_WARNING, filename, line, r,
"Cannot add NetDev '%s' to network, ignoring assignment: %m", name);
@ -817,7 +815,7 @@ int config_parse_stacked_netdev(const char *unit,
log_syntax(unit, LOG_DEBUG, filename, line, r,
"NetDev '%s' specified twice, ignoring.", name);
else
name = NULL;
TAKE_PTR(name);
return 0;
}

View File

@ -2,8 +2,6 @@
[Service]
MountAPIVFS=yes
TemporaryFileSystem=/run
BindReadOnlyPaths=/run/systemd/notify
BindReadOnlyPaths=/dev/log /run/systemd/journal/socket /run/systemd/journal/stdout
BindReadOnlyPaths=/etc/machine-id
BindReadOnlyPaths=/etc/resolv.conf

View File

@ -2,8 +2,6 @@
[Service]
MountAPIVFS=yes
TemporaryFileSystem=/run
BindReadOnlyPaths=/run/systemd/notify
BindReadOnlyPaths=/dev/log /run/systemd/journal/socket /run/systemd/journal/stdout
BindReadOnlyPaths=/etc/machine-id
BindReadOnlyPaths=/run/dbus/system_bus_socket

View File

@ -2,8 +2,6 @@
[Service]
MountAPIVFS=yes
TemporaryFileSystem=/run
BindReadOnlyPaths=/run/systemd/notify
BindReadOnlyPaths=/dev/log /run/systemd/journal/socket /run/systemd/journal/stdout
BindReadOnlyPaths=/etc/machine-id
DynamicUser=yes

View File

@ -2,6 +2,5 @@
[Service]
MountAPIVFS=yes
BindPaths=/run
BindReadOnlyPaths=/etc/machine-id
BindReadOnlyPaths=/etc/resolv.conf

View File

@ -269,11 +269,13 @@ static int vl_method_resolve_hostname(Varlink *link, JsonVariant *parameters, Va
_cleanup_(lookup_parameters_destroy) LookupParameters p = {
.family = AF_UNSPEC,
};
Manager *m = userdata;
DnsQuery *q;
Manager *m;
int r;
assert(link);
m = varlink_server_get_userdata(varlink_get_server(link));
assert(m);
if (FLAGS_SET(flags, VARLINK_METHOD_ONEWAY))
@ -447,11 +449,13 @@ static int vl_method_resolve_address(Varlink *link, JsonVariant *parameters, Var
_cleanup_(lookup_parameters_destroy) LookupParameters p = {
.family = AF_UNSPEC,
};
Manager *m = userdata;
DnsQuery *q;
Manager *m;
int r;
assert(link);
m = varlink_server_get_userdata(varlink_get_server(link));
assert(m);
if (FLAGS_SET(flags, VARLINK_METHOD_ONEWAY))

View File

@ -2137,7 +2137,9 @@ int varlink_server_add_connection(VarlinkServer *server, int fd, Varlink **ret)
return r;
v->fd = fd;
if (server->flags & VARLINK_SERVER_INHERIT_USERDATA)
v->userdata = server->userdata;
if (ucred_acquired) {
v->ucred = ucred;
v->ucred_acquired = true;

View File

@ -44,8 +44,9 @@ typedef enum VarlinkServerFlags {
VARLINK_SERVER_ROOT_ONLY = 1 << 0, /* Only accessible by root */
VARLINK_SERVER_MYSELF_ONLY = 1 << 1, /* Only accessible by our own UID */
VARLINK_SERVER_ACCOUNT_UID = 1 << 2, /* Do per user accounting */
VARLINK_SERVER_INHERIT_USERDATA = 1 << 3, /* Initialize Varlink connection userdata from VarlinkServer userdata */
_VARLINK_SERVER_FLAGS_ALL = (1 << 3) - 1,
_VARLINK_SERVER_FLAGS_ALL = (1 << 4) - 1,
} VarlinkServerFlags;
typedef int (*VarlinkMethod)(Varlink *link, JsonVariant *parameters, VarlinkMethodFlags flags, void *userdata);

View File

@ -29,18 +29,13 @@
#include "stat-util.h"
#include "terminal-util.h"
#include "user-util.h"
#include "verbs.h"
static enum {
ACTION_STATUS,
ACTION_MERGE,
ACTION_UNMERGE,
ACTION_REFRESH,
ACTION_LIST,
} arg_action = ACTION_STATUS;
static char **arg_hierarchies = NULL; /* "/usr" + "/opt" by default */
static char *arg_root = NULL;
static JsonFormatFlags arg_json_format_flags = JSON_FORMAT_OFF;
static PagerFlags arg_pager_flags = 0;
static bool arg_force = false;
STATIC_DESTRUCTOR_REGISTER(arg_hierarchies, strv_freep);
STATIC_DESTRUCTOR_REGISTER(arg_root, freep);
@ -149,7 +144,15 @@ static int unmerge(void) {
return ret;
}
static int status(void) {
static int verb_unmerge(int argc, char **argv, void *userdata) {
if (!have_effective_cap(CAP_SYS_ADMIN))
return log_error_errno(SYNTHETIC_ERRNO(EPERM), "Need to be privileged.");
return unmerge();
}
static int verb_status(int argc, char **argv, void *userdata) {
_cleanup_(table_unrefp) Table *t = NULL;
int r, ret = 0;
char **p;
@ -399,6 +402,76 @@ static int strverscmpp(char *const* a, char *const* b) {
return strverscmp(*a, *b);
}
static int validate_version(
const char *root,
const char *name,
const char *host_os_release_id,
const char *host_os_release_version_id,
const char *host_os_release_sysext_level) {
_cleanup_free_ char *extension_release_id = NULL, *extension_release_version_id = NULL, *extension_release_sysext_level = NULL;
int r;
assert(root);
assert(name);
if (arg_force) {
log_debug("Force mode enabled, skipping version validation.");
return 1;
}
/* Insist that extension images do not overwrite the underlying OS release file (it's fine if
* they place one in /etc/os-release, i.e. where things don't matter, as they aren't
* merged.) */
r = chase_symlinks("/usr/lib/os-release", root, CHASE_PREFIX_ROOT, NULL, NULL);
if (r < 0) {
if (r != -ENOENT)
return log_error_errno(r, "Failed to determine whether /usr/lib/os-release exists in the extension image: %m");
} else
return log_error_errno(SYNTHETIC_ERRNO(EINVAL),
"Extension image contains /usr/lib/os-release file, which is not allowed (it may carry /etc/os-release), refusing.");
/* Now that we can look into the extension image, let's see if the OS version is compatible */
r = parse_extension_release(
root,
name,
"ID", &extension_release_id,
"VERSION_ID", &extension_release_version_id,
"SYSEXT_LEVEL", &extension_release_sysext_level,
NULL);
if (r == -ENOENT) {
log_notice_errno(r, "Extension '%s' carries no extension-release data, ignoring extension.", name);
return 0;
}
if (r < 0)
return log_error_errno(r, "Failed to acquire 'os-release' data of extension '%s': %m", name);
if (!streq_ptr(host_os_release_id, extension_release_id)) {
log_notice("Extension '%s' is for OS '%s', but running on '%s', ignoring extension.",
name, strna(extension_release_id), strna(host_os_release_id));
return 0;
}
/* If the extension has a sysext API level declared, then it must match the host API
* level. Otherwise, compare OS version as a whole */
if (extension_release_sysext_level) {
if (!streq_ptr(host_os_release_sysext_level, extension_release_sysext_level)) {
log_notice("Extension '%s' is for sysext API level '%s', but running on sysext API level '%s', ignoring extension.",
name, extension_release_sysext_level, strna(host_os_release_sysext_level));
return 0;
}
} else {
if (!streq_ptr(host_os_release_version_id, extension_release_version_id)) {
log_notice("Extension '%s' is for OS version '%s', but running on OS version '%s', ignoring extension.",
name, extension_release_version_id, strna(host_os_release_version_id));
return 0;
}
}
log_debug("Version info of extension '%s' matches host.", name);
return 1;
}
static int merge_subprocess(Hashmap *images, const char *workspace) {
_cleanup_free_ char *host_os_release_id = NULL, *host_os_release_version_id = NULL, *host_os_release_sysext_level = NULL,
*buf = NULL;
@ -440,8 +513,7 @@ static int merge_subprocess(Hashmap *images, const char *workspace) {
/* Let's now mount all images */
HASHMAP_FOREACH(img, images) {
_cleanup_free_ char *p = NULL,
*extension_release_id = NULL, *extension_release_version_id = NULL, *extension_release_sysext_level = NULL;
_cleanup_free_ char *p = NULL;
p = path_join(workspace, "extensions", img->name);
if (!p)
@ -523,57 +595,17 @@ static int merge_subprocess(Hashmap *images, const char *workspace) {
assert_not_reached("Unsupported image type");
}
/* Insist that extension images do not overwrite the underlying OS release file (it's fine if
* they place one in /etc/os-release, i.e. where things don't matter, as they aren't
* merged.) */
r = chase_symlinks("/usr/lib/os-release", p, CHASE_PREFIX_ROOT, NULL, NULL);
if (r < 0) {
if (r != -ENOENT)
return log_error_errno(r, "Failed to determine whether /usr/lib/os-release exists in the extension image: %m");
} else
return log_error_errno(SYNTHETIC_ERRNO(EINVAL),
"Extension image contains /usr/lib/os-release file, which is not allowed (it may carry /etc/os-release), refusing.");
/* Now that we can look into the extension image, let's see if the OS version is compatible */
r = parse_extension_release(
r = validate_version(
p,
img->name,
"ID", &extension_release_id,
"VERSION_ID", &extension_release_version_id,
"SYSEXT_LEVEL", &extension_release_sysext_level,
NULL);
if (r == -ENOENT) {
log_notice_errno(r, "Extension '%s' carries no extension-release data, ignoring extension.", img->name);
host_os_release_id,
host_os_release_version_id,
host_os_release_sysext_level);
if (r < 0)
return r;
if (r == 0) {
n_ignored++;
continue;
} else if (r < 0)
return log_error_errno(r, "Failed to acquire 'os-release' data of extension '%s': %m", img->name);
else {
if (!streq_ptr(host_os_release_id, extension_release_id)) {
log_notice("Extension '%s' is for OS '%s', but running on '%s', ignoring extension.",
img->name, strna(extension_release_id), strna(host_os_release_id));
n_ignored++;
continue;
}
/* If the extension has a sysext API level declared, then it must match the host API level. Otherwise, compare OS version as a whole */
if (extension_release_sysext_level) {
if (!streq_ptr(host_os_release_sysext_level, extension_release_sysext_level)) {
log_notice("Extension '%s' is for sysext API level '%s', but running on sysext API level '%s', ignoring extension.",
img->name, extension_release_sysext_level, strna(host_os_release_sysext_level));
n_ignored++;
continue;
}
} else {
if (!streq_ptr(host_os_release_version_id, extension_release_version_id)) {
log_notice("Extension '%s' is for OS version '%s', but running on OS version '%s', ignoring extension.",
img->name, extension_release_version_id, strna(host_os_release_version_id));
n_ignored++;
continue;
}
}
log_debug("Version info of extension '%s' matches host.", img->name);
}
/* Noice! This one is an extension we want. */
@ -711,7 +743,134 @@ static int merge(Hashmap *images) {
return r != 123; /* exit code 123 means: didn't do anything */
}
static int help(void) {
static int verb_merge(int argc, char **argv, void *userdata) {
_cleanup_(hashmap_freep) Hashmap *images = NULL;
char **p;
int r;
if (!have_effective_cap(CAP_SYS_ADMIN))
return log_error_errno(SYNTHETIC_ERRNO(EPERM), "Need to be privileged.");
images = hashmap_new(&image_hash_ops);
if (!images)
return log_oom();
r = image_discover(IMAGE_EXTENSION, arg_root, images);
if (r < 0)
return log_error_errno(r, "Failed to discover extension images: %m");
/* In merge mode fail if things are already merged. (In --refresh mode below we'll unmerge if we find
* things are already merged...) */
STRV_FOREACH(p, arg_hierarchies) {
_cleanup_free_ char *resolved = NULL;
r = chase_symlinks(*p, arg_root, CHASE_PREFIX_ROOT, &resolved, NULL);
if (r == -ENOENT) {
log_debug_errno(r, "Hierarchy '%s%s' does not exist, ignoring.", strempty(arg_root), *p);
continue;
}
if (r < 0)
return log_error_errno(r, "Failed to resolve path to hierarchy '%s%s': %m", strempty(arg_root), *p);
r = is_our_mount_point(resolved);
if (r < 0)
return r;
if (r > 0)
return log_error_errno(SYNTHETIC_ERRNO(EBUSY),
"Hierarchy '%s' is already merged.", *p);
}
return merge(images);
}
static int verb_refresh(int argc, char **argv, void *userdata) {
_cleanup_(hashmap_freep) Hashmap *images = NULL;
int r;
if (!have_effective_cap(CAP_SYS_ADMIN))
return log_error_errno(SYNTHETIC_ERRNO(EPERM), "Need to be privileged.");
images = hashmap_new(&image_hash_ops);
if (!images)
return log_oom();
r = image_discover(IMAGE_EXTENSION, arg_root, images);
if (r < 0)
return log_error_errno(r, "Failed to discover extension images: %m");
r = merge(images); /* Returns > 0 if it did something, i.e. a new overlayfs is mounted now. When it
* does so it implicitly unmounts any overlayfs placed there before. Returns == 0
* if it did nothing, i.e. no extension images found. In this case the old
* overlayfs remains in place if there was one. */
if (r < 0)
return r;
if (r == 0) /* No images found? Then unmerge. The goal of --refresh is after all that after having
* called there's a guarantee that the merge status matches the installed extensions. */
r = unmerge();
/* Net result here is that:
*
* 1. If an overlayfs was mounted before and no extensions exist anymore, we'll have unmerged things.
*
* 2. If an overlayfs was mounted before, and there are still extensions installed' we'll have
* unmerged and then merged things again.
*
* 3. If an overlayfs so far wasn't mounted, and there are extensions installed, we'll have it
* mounted now.
*
* 4. If there was no overlayfs mount so far, and no extensions installed, we implement a NOP.
*/
return 0;
}
static int verb_list(int argc, char **argv, void *userdata) {
_cleanup_(hashmap_freep) Hashmap *images = NULL;
_cleanup_(table_unrefp) Table *t = NULL;
Image *img;
int r;
images = hashmap_new(&image_hash_ops);
if (!images)
return log_oom();
r = image_discover(IMAGE_EXTENSION, arg_root, images);
if (r < 0)
return log_error_errno(r, "Failed to discover extension images: %m");
if ((arg_json_format_flags & JSON_FORMAT_OFF) && hashmap_isempty(images)) {
log_info("No OS extensions found.");
return 0;
}
t = table_new("name", "type", "path", "time");
if (!t)
return log_oom();
HASHMAP_FOREACH(img, images) {
r = table_add_many(
t,
TABLE_STRING, img->name,
TABLE_STRING, image_type_to_string(img->type),
TABLE_PATH, img->path,
TABLE_TIMESTAMP, img->mtime != 0 ? img->mtime : img->crtime);
if (r < 0)
return table_log_add_error(r);
}
(void) table_set_sort(t, (size_t) 0, (size_t) -1);
if (arg_json_format_flags & (JSON_FORMAT_OFF|JSON_FORMAT_PRETTY|JSON_FORMAT_PRETTY_AUTO))
(void) pager_open(arg_pager_flags);
r = table_print_json(t, stdout, arg_json_format_flags);
if (r < 0)
return table_log_print_error(r);
return 0;
}
static int verb_help(int argc, char **argv, void *userdata) {
_cleanup_free_ char *link = NULL;
int r;
@ -722,17 +881,19 @@ static int help(void) {
printf("%1$s [OPTIONS...] [DEVICE]\n"
"\n%5$sMerge extension images into /usr/ and /opt/ hierarchies.%6$s\n"
"\n%3$sCommands:%4$s\n"
" status Show current merge status (default)\n"
" merge Merge extensions into /usr/ and /opt/\n"
" unmerge Unmerge extensions from /usr/ and /opt/\n"
" refresh Unmerge/merge extensions again\n"
" list List installed extensions\n"
" -h --help Show this help\n"
" --version Show package version\n"
" -m --merge Merge extensions into /usr/ and /opt/\n"
" -u --unmerge Unmerge extensions from /usr/ and /opt/\n"
" -R --refresh Unmerge/merge extensions again\n"
" -l --list List all OS images\n"
"\n%3$sOptions:%4$s\n"
" --no-pager Do not pipe output into a pager\n"
" --root=PATH Operate relative to root path\n"
" --json=pretty|short|off\n"
" Generate JSON output\n"
" --force Ignore version incompatibilities\n"
"\nSee the %2$s for details.\n"
, program_invocation_short_name
, link
@ -748,12 +909,9 @@ static int parse_argv(int argc, char *argv[]) {
enum {
ARG_VERSION = 0x100,
ARG_NO_PAGER,
ARG_MERGE,
ARG_UNMERGE,
ARG_REFRESH,
ARG_LIST,
ARG_ROOT,
ARG_JSON,
ARG_FORCE,
};
static const struct option options[] = {
@ -761,11 +919,8 @@ static int parse_argv(int argc, char *argv[]) {
{ "version", no_argument, NULL, ARG_VERSION },
{ "no-pager", no_argument, NULL, ARG_NO_PAGER },
{ "root", required_argument, NULL, ARG_ROOT },
{ "merge", no_argument, NULL, 'm' },
{ "unmerge", no_argument, NULL, 'u' },
{ "refresh", no_argument, NULL, 'R' },
{ "list", no_argument, NULL, 'l' },
{ "json", required_argument, NULL, ARG_JSON },
{ "force", no_argument, NULL, ARG_FORCE },
{}
};
@ -774,12 +929,12 @@ static int parse_argv(int argc, char *argv[]) {
assert(argc >= 0);
assert(argv);
while ((c = getopt_long(argc, argv, "hmuRl", options, NULL)) >= 0)
while ((c = getopt_long(argc, argv, "h", options, NULL)) >= 0)
switch (c) {
case 'h':
return help();
return verb_help(argc, argv, NULL);
case ARG_VERSION:
return version();
@ -788,22 +943,6 @@ static int parse_argv(int argc, char *argv[]) {
arg_pager_flags |= PAGER_DISABLE;
break;
case 'm':
arg_action = ACTION_MERGE;
break;
case 'u':
arg_action = ACTION_UNMERGE;
break;
case 'R':
arg_action = ACTION_REFRESH;
break;
case 'l':
arg_action = ACTION_LIST;
break;
case ARG_ROOT:
r = parse_path_argument_and_warn(optarg, false, &arg_root);
if (r < 0)
@ -817,6 +956,10 @@ static int parse_argv(int argc, char *argv[]) {
break;
case ARG_FORCE:
arg_force = true;
break;
case '?':
return -EINVAL;
@ -824,10 +967,6 @@ static int parse_argv(int argc, char *argv[]) {
assert_not_reached("Unhandled option");
}
if (argc - optind > 0)
return log_error_errno(SYNTHETIC_ERRNO(EINVAL),
"Unexpected argument.");
return 1;
}
@ -871,13 +1010,25 @@ static int parse_env(void) {
return 0;
}
static int sysext_main(int argc, char *argv[]) {
static const Verb verbs[] = {
{ "status", VERB_ANY, 1, VERB_DEFAULT, verb_status },
{ "merge", VERB_ANY, 1, 0, verb_merge },
{ "unmerge", VERB_ANY, 1, 0, verb_unmerge },
{ "refresh", VERB_ANY, 1, 0, verb_refresh },
{ "list", VERB_ANY, 1, 0, verb_list },
{ "help", VERB_ANY, 1, 0, verb_help },
{}
};
return dispatch_verb(argc, argv, verbs, NULL);
}
static int run(int argc, char *argv[]) {
_cleanup_(hashmap_freep) Hashmap *images = NULL;
int r;
log_show_color(true);
log_parse_environment();
log_open();
log_setup_cli();
r = parse_argv(argc, argv);
if (r <= 0)
@ -893,126 +1044,7 @@ static int run(int argc, char *argv[]) {
return log_oom();
}
/* Given that things deep down in the child process will fail, let's catch the no-privilege issue
* early on */
if (!IN_SET(arg_action, ACTION_STATUS, ACTION_LIST) && !have_effective_cap(CAP_SYS_ADMIN))
return log_error_errno(SYNTHETIC_ERRNO(EPERM), "Need to be privileged.");
if (arg_action == ACTION_STATUS)
return status();
if (arg_action == ACTION_UNMERGE)
return unmerge();
images = hashmap_new(&image_hash_ops);
if (!images)
return log_oom();
r = image_discover(IMAGE_EXTENSION, arg_root, images);
if (r < 0)
return log_error_errno(r, "Failed to discover extension images: %m");
switch (arg_action) {
case ACTION_LIST: {
_cleanup_(table_unrefp) Table *t = NULL;
Image *img;
if ((arg_json_format_flags & JSON_FORMAT_OFF) && hashmap_isempty(images)) {
log_info("No OS extensions found.");
return 0;
}
t = table_new("name", "type", "path", "time");
if (!t)
return log_oom();
HASHMAP_FOREACH(img, images) {
r = table_add_many(
t,
TABLE_STRING, img->name,
TABLE_STRING, image_type_to_string(img->type),
TABLE_PATH, img->path,
TABLE_TIMESTAMP, img->mtime != 0 ? img->mtime : img->crtime);
if (r < 0)
return table_log_add_error(r);
}
(void) table_set_sort(t, (size_t) 0, (size_t) -1);
if (arg_json_format_flags & (JSON_FORMAT_OFF|JSON_FORMAT_PRETTY|JSON_FORMAT_PRETTY_AUTO))
(void) pager_open(arg_pager_flags);
r = table_print_json(t, stdout, arg_json_format_flags);
if (r < 0)
return table_log_print_error(r);
r = 0;
break;
}
case ACTION_MERGE: {
char **p;
/* In merge mode fail if things are already merged. (In --refresh mode below we'll unmerge if
* we find things are already merged...) */
STRV_FOREACH(p, arg_hierarchies) {
_cleanup_free_ char *resolved = NULL;
r = chase_symlinks(*p, arg_root, CHASE_PREFIX_ROOT, &resolved, NULL);
if (r == -ENOENT) {
log_debug_errno(r, "Hierarchy '%s%s' does not exist, ignoring.", strempty(arg_root), *p);
continue;
}
if (r < 0)
return log_error_errno(r, "Failed to resolve path to hierarchy '%s%s': %m", strempty(arg_root), *p);
r = is_our_mount_point(resolved);
if (r < 0)
return r;
if (r > 0)
return log_error_errno(SYNTHETIC_ERRNO(EBUSY),
"Hierarchy '%s' is already merged.", *p);
}
r = merge(images);
break;
}
case ACTION_REFRESH:
r = merge(images); /* Returns > 0 if it did something, i.e. a new overlayfs is mounted
* now. When it does so it implicitly unmounts any overlayfs placed there
* before. Returns == 0 if it did nothing, i.e. no extension images
* found. In this case the old overlayfs remains in place if there was
* one. */
if (r < 0)
return r;
if (r == 0) /* No images found? Then unmerge. The goal of --refresh is after all that after
* having called there's a guarantee that the merge status matches the installed
* extensions. */
r = unmerge();
/* Net result here is that:
*
* 1. If an overlayfs was mounted before and no extensions exist anymore, we'll have unmerged
* things.
*
* 2. If an overlayfs was mounted before, and there are still extensions installed' we'll
* have unmerged and then merged things again.
*
* 3. If an overlayfs so far wasn't mounted, and there are extensions installed, we'll have
* it mounted now.
*
* 4. If there was no overlayfs mount so far, and no extensions installed, we implement a
* NOP.
*/
break;
default:
assert_not_reached("Uneexpected action");
}
return r;
return sysext_main(argc, argv);
}
DEFINE_MAIN_FUNCTION(run);

View File

@ -174,6 +174,7 @@ static void test_protect_kernel_logs(void) {
NULL,
NULL,
NULL,
NULL,
0,
NULL);
assert_se(r == 0);

View File

@ -89,6 +89,7 @@ int main(int argc, char *argv[]) {
NULL,
NULL,
NULL,
NULL,
0,
NULL);
if (r < 0) {

View File

@ -2147,19 +2147,17 @@ static int udev_rule_apply_token_to_event(
if (IN_SET(token->op, OP_ASSIGN, OP_ASSIGN_FINAL))
ordered_hashmap_clear_free_key(event->run_list);
r = ordered_hashmap_ensure_allocated(&event->run_list, NULL);
if (r < 0)
return log_oom();
(void) udev_event_apply_format(event, token->value, buf, sizeof(buf), false);
cmd = strdup(buf);
if (!cmd)
return log_oom();
r = ordered_hashmap_put(event->run_list, cmd, token->data);
if (r < 0)
r = ordered_hashmap_ensure_put(&event->run_list, NULL, cmd, token->data);
if (r == -ENOMEM)
return log_oom();
if (r < 0)
return log_rule_error_errno(dev, rules, r, "Failed to store command '%s': %m", cmd);
TAKE_PTR(cmd);

View File

@ -24,8 +24,8 @@ ConditionDirectoryNotEmpty=|/usr/lib/extensions
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=systemd-sysext --merge
ExecStop=systemd-sysext --unmerge
ExecStart=systemd-sysext merge
ExecStop=systemd-sysext unmerge
[Install]
WantedBy=sysinit.target