Compare commits
20 commits: 61dbcae179 ... 3730f117d4

Author | SHA1
---|---
Daan De Meyer | 3730f117d4
Daan De Meyer | fee2b5aa48
Daan De Meyer | 729e26e8f9
Daan De Meyer | 099b16c3e7
Daan De Meyer | 7a7f306b6c
Yu Watanabe | 4f2975385f
Daan De Meyer | 0432e28394
Yu Watanabe | fc956a3973
Yu Watanabe | b0dbb4aa3a
Yu Watanabe | a95ae2d36a
Yu Watanabe | be8e4b1a87
Adrian Vovk | cf612c5fd5
Adrian Vovk | 2cb9c68c3a
Adrian Vovk | 78e9059208
Adrian Vovk | e671bdc5c3
Yu Watanabe | 572d031eca
Yu Watanabe | 25da422bd1
Yu Watanabe | 5872ea7008
Lennart Poettering | a2369d0224
Lennart Poettering | a37640653c
TODO

@@ -189,6 +189,8 @@ Features:
 * go through our codebase, and convert "vertical tables" (i.e. things such as
   "systemctl status") to use table_new_vertical() for output

+* pcrlock: add support for multi-profile UKIs
+
 * logind: when logging in use new tmpfs quota support to configure quota on
   /tmp/ + /dev/shm/. But do so only in case of tmpfs, because otherwise quota
   is persistent and any persistent settings mean we don't have to reapply them.
@@ -76,16 +76,7 @@
         <term><varname>Type=</varname></term>

         <listitem><para>The GPT partition type UUID to match. This may be a GPT partition type UUID such as
-        <constant>4f68bce3-e8cd-4db1-96e7-fbcaf984b709</constant>, or an identifier.
-        Architecture specific partition types can use one of these architecture identifiers:
-        <constant>alpha</constant>, <constant>arc</constant>, <constant>arm</constant> (32-bit),
-        <constant>arm64</constant> (64-bit, aka aarch64), <constant>ia64</constant>,
-        <constant>loongarch64</constant>, <constant>mips-le</constant>, <constant>mips64-le</constant>,
-        <constant>parisc</constant>, <constant>ppc</constant>, <constant>ppc64</constant>,
-        <constant>ppc64-le</constant>, <constant>riscv32</constant>, <constant>riscv64</constant>,
-        <constant>s390</constant>, <constant>s390x</constant>, <constant>tilegx</constant>,
-        <constant>x86</constant> (32-bit, aka i386) and <constant>x86-64</constant> (64-bit, aka amd64).
-        </para>
+        <constant>4f68bce3-e8cd-4db1-96e7-fbcaf984b709</constant>, or an identifier.</para>

+        <para>The supported identifiers are:</para>
@@ -237,7 +228,14 @@
       </tgroup>
     </table>

     <para>This setting defaults to <constant>linux-generic</constant>.</para>

+    <para>Architecture specific partition types can use one of these architecture identifiers:
+    <constant>alpha</constant>, <constant>arc</constant>, <constant>arm</constant> (32-bit),
+    <constant>arm64</constant> (64-bit, aka aarch64), <constant>ia64</constant>,
+    <constant>loongarch64</constant>, <constant>mips-le</constant>, <constant>mips64-le</constant>,
+    <constant>parisc</constant>, <constant>ppc</constant>, <constant>ppc64</constant>,
+    <constant>ppc64-le</constant>, <constant>riscv32</constant>, <constant>riscv64</constant>,
+    <constant>s390</constant>, <constant>s390x</constant>, <constant>tilegx</constant>,
+    <constant>x86</constant> (32-bit, aka i386) and <constant>x86-64</constant> (64-bit, aka amd64).</para>
+
     <para>Most of the partition type UUIDs listed above are defined in the <ulink
     url="https://uapi-group.org/specifications/specs/discoverable_partitions_specification">Discoverable Partitions
@@ -897,6 +895,59 @@

           <xi:include href="version-info.xml" xpointer="v257"/></listitem>
         </varlistentry>

+        <varlistentry>
+          <term><varname>SupplementFor=</varname></term>
+
+          <listitem><para>Takes a partition definition name, such as <literal>10-esp</literal>. If specified,
+          <command>systemd-repart</command> will avoid creating this partition and instead prefer to partially
+          merge the two definitions. However, depending on the existing layout of partitions on disk,
+          <command>systemd-repart</command> may be forced to fall back onto un-merging the definitions and
+          using them as originally written, potentially creating this partition. Specifically,
+          <command>systemd-repart</command> will fall back if this partition is found to already exist on disk,
+          or if the target partition already exists on disk but is too small, or if it cannot allocate space
+          for the merged partition for some other reason.</para>
+
+          <para>The following fields are merged into the target definition in the specified ways:
+          <varname>Weight=</varname> and <varname>PaddingWeight=</varname> are simply overwritten;
+          <varname>SizeMinBytes=</varname> and <varname>PaddingMinBytes=</varname> use the larger of the two
+          values; <varname>SizeMaxBytes=</varname> and <varname>PaddingMaxBytes=</varname> use the smaller
+          value; and <varname>CopyFiles=</varname>, <varname>ExcludeFiles=</varname>,
+          <varname>ExcludeFilesTarget=</varname>, <varname>MakeDirectories=</varname>, and
+          <varname>Subvolumes=</varname> are concatenated.</para>
+
+          <para>Usage of this option in combination with <varname>CopyBlocks=</varname>,
+          <varname>Encrypt=</varname>, or <varname>Verity=</varname> is not supported. The target definition
+          cannot set these settings either. A definition cannot simultaneously be a supplement and act as a
+          target for some other supplement definition. A target cannot have more than one supplement partition
+          associated with it.</para>
+
+          <para>For example, distributions can use this to implement <variable>$BOOT</variable> as defined in
+          the <ulink url="https://uapi-group.org/specifications/specs/boot_loader_specification/">Boot Loader
+          Specification</ulink>. Distributions may prefer to use the ESP as <variable>$BOOT</variable> whenever
+          possible, but to adhere to the spec XBOOTLDR must sometimes be used instead. So, they should create
+          two definitions: the first defining an ESP big enough to hold just the boot loader, and a second for
+          the XBOOTLDR that's sufficiently large to hold kernels and configured as a supplement for the ESP.
+          Whenever possible, <command>systemd-repart</command> will try to merge the two definitions to create
+          one large ESP, but if that's not allowed by the existing conditions on disk, a small ESP and a
+          large XBOOTLDR will be created instead.</para>
+
+          <para>As another example, distributions can also use this to seamlessly share a single
+          <filename>/home</filename> partition in a multi-boot scenario, while preferring to keep
+          <filename>/home</filename> on the root partition by default. Having a <filename>/home</filename>
+          partition separated from the root partition entails some extra complexity: someone has to decide how
+          to split the space between the two partitions. On the other hand, it allows a user to share their
+          home area between multiple installed OSes (e.g. via <citerefentry><refentrytitle>systemd-homed.service
+          </refentrytitle><manvolnum>8</manvolnum></citerefentry>). Distributions should create two definitions:
+          the first for a root partition that takes up some relatively small percentage of the disk, and the
+          second as a supplement for the first to create a <filename>/home</filename> partition that takes up
+          all the remaining free space. On first boot, if <command>systemd-repart</command> finds an existing
+          <filename>/home</filename> partition on disk, it'll un-merge the definitions and create just a small
+          root partition. Otherwise, the definitions will be merged and a single large root partition will be
+          created.</para>
+
+          <xi:include href="version-info.xml" xpointer="v257"/></listitem>
+        </varlistentry>
       </variablelist>
     </refsect1>
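The $BOOT arrangement described above can be sketched as two repart.d definition files. This is an illustrative sketch only: the file names, formats, and sizes below are assumptions, not taken from the diff; only the `SupplementFor=10-esp` reference mirrors the documented example.

```ini
# 10-esp.conf — hypothetical sketch of an ESP just big enough for the boot loader
[Partition]
Type=esp
Format=vfat
SizeMinBytes=256M

# 20-xbootldr.conf — hypothetical XBOOTLDR configured as a supplement for the ESP
[Partition]
Type=xbootldr
Format=vfat
SizeMinBytes=512M
SupplementFor=10-esp
```

With these two files, systemd-repart merges the definitions into one large ESP when the disk layout permits, and otherwise falls back to creating a small ESP plus a large XBOOTLDR, as the documentation above describes.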
@@ -300,9 +300,10 @@ int log_emergency_level(void);
 #define log_dump(level, buffer) \
         log_dump_internal(level, 0, PROJECT_FILE, __LINE__, __func__, buffer)

-#define log_oom() log_oom_internal(LOG_ERR, PROJECT_FILE, __LINE__, __func__)
-#define log_oom_debug() log_oom_internal(LOG_DEBUG, PROJECT_FILE, __LINE__, __func__)
-#define log_oom_warning() log_oom_internal(LOG_WARNING, PROJECT_FILE, __LINE__, __func__)
+#define log_oom_full(level) log_oom_internal(level, PROJECT_FILE, __LINE__, __func__)
+#define log_oom() log_oom_full(LOG_ERR)
+#define log_oom_debug() log_oom_full(LOG_DEBUG)
+#define log_oom_warning() log_oom_full(LOG_WARNING)

 bool log_on_console(void) _pure_;
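A minimal sketch of the pattern this hunk applies: the level-specific helpers become thin wrappers over a single parameterized base macro, so a caller-chosen level no longer needs its own macro. The `log_oom_internal` stand-in below is hypothetical and only mimics the shape of the real helper.

```c
#include <stdio.h>

/* Hypothetical stand-in for systemd's log_oom_internal(): records where it
 * was called from and returns -ENOMEM like the real helper does. */
static int log_oom_internal(int level, const char *file, int line, const char *func) {
        fprintf(stderr, "%s:%d: %s: out of memory (level=%d)\n", file, line, func, level);
        return -12; /* -ENOMEM */
}

#define LOG_ERR     3
#define LOG_WARNING 4
#define LOG_DEBUG   7

/* The refactor: one parameterized base macro, three thin wrappers. */
#define log_oom_full(level)  log_oom_internal(level, __FILE__, __LINE__, __func__)
#define log_oom()            log_oom_full(LOG_ERR)
#define log_oom_debug()      log_oom_full(LOG_DEBUG)
#define log_oom_warning()    log_oom_full(LOG_WARNING)
```

The benefit is that `log_oom_full(level)` now also accepts a runtime-computed level, which the wrapper-only form could not express.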
@@ -153,7 +153,7 @@ bool strv_overlap(char * const *a, char * const *b) _pure_;
         _STRV_FOREACH_BACKWARDS(s, l, UNIQ_T(h, UNIQ), UNIQ_T(i, UNIQ))

 #define _STRV_FOREACH_PAIR(x, y, l, i) \
-        for (typeof(*l) *x, *y, *i = (l); \
+        for (typeof(*(l)) *x, *y, *i = (l); \
                 i && *(x = i) && *(y = i + 1); \
                 i += 2)
@@ -2424,6 +2424,55 @@ static int has_regular_user(void) {
        return false;
 }

+static int acquire_group_list(char ***ret) {
+        _cleanup_(userdb_iterator_freep) UserDBIterator *iterator = NULL;
+        _cleanup_strv_free_ char **groups = NULL;
+        int r;
+
+        assert(ret);
+
+        r = groupdb_all(USERDB_SUPPRESS_SHADOW|USERDB_EXCLUDE_DYNAMIC_USER, &iterator);
+        if (r == -ENOLINK)
+                log_debug_errno(r, "No groups found. (Didn't check via Varlink.)");
+        else if (r == -ESRCH)
+                log_debug_errno(r, "No groups found.");
+        else if (r < 0)
+                return log_debug_errno(r, "Failed to enumerate groups, ignoring: %m");
+        else {
+                for (;;) {
+                        _cleanup_(group_record_unrefp) GroupRecord *gr = NULL;
+
+                        r = groupdb_iterator_get(iterator, &gr);
+                        if (r == -ESRCH)
+                                break;
+                        if (r < 0)
+                                return log_debug_errno(r, "Failed to acquire next group: %m");
+
+                        if (!IN_SET(group_record_disposition(gr), USER_REGULAR, USER_SYSTEM))
+                                continue;
+
+                        if (group_record_disposition(gr) == USER_REGULAR) {
+                                _cleanup_(user_record_unrefp) UserRecord *ur = NULL;
+
+                                r = userdb_by_name(gr->group_name, USERDB_SUPPRESS_SHADOW|USERDB_EXCLUDE_DYNAMIC_USER, &ur);
+                                if (r < 0 && r != -ESRCH)
+                                        return log_debug_errno(r, "Failed to check if matching user exists for group '%s': %m", gr->group_name);
+
+                                if (r >= 0 && user_record_disposition(ur) == USER_REGULAR)
+                                        continue;
+                        }
+
+                        r = strv_extend(&groups, gr->group_name);
+                        if (r < 0)
+                                return log_oom();
+                }
+        }
+
+        *ret = TAKE_PTR(groups);
+        return !!*ret;
+}
+
 static int create_interactively(void) {
        _cleanup_(sd_bus_flush_close_unrefp) sd_bus *bus = NULL;
        _cleanup_free_ char *username = NULL;
@@ -2463,7 +2512,7 @@ static int create_interactively(void) {
                        continue;
                }

-               r = userdb_by_name(username, USERDB_SUPPRESS_SHADOW|USERDB_EXCLUDE_DYNAMIC_USER, /* ret= */ NULL);
+               r = userdb_by_name(username, USERDB_SUPPRESS_SHADOW, /* ret= */ NULL);
                if (r == -ESRCH)
                        break;
                if (r < 0)
@@ -2476,47 +2525,8 @@ static int create_interactively(void) {
        if (r < 0)
                return log_error_errno(r, "Failed to set userName field: %m");

-       _cleanup_(userdb_iterator_freep) UserDBIterator *iterator = NULL;
-       _cleanup_strv_free_ char **available = NULL, **groups = NULL;
-
-       r = groupdb_all(USERDB_SUPPRESS_SHADOW|USERDB_EXCLUDE_DYNAMIC_USER, &iterator);
-       if (r == -ENOLINK)
-               log_debug_errno(r, "No entries found. (Didn't check via Varlink.)");
-       else if (r == -ESRCH)
-               log_debug_errno(r, "No entries found.");
-       else if (r < 0)
-               return log_error_errno(r, "Failed to enumerate groups: %m");
-       else {
-               for (;;) {
-                       _cleanup_(group_record_unrefp) GroupRecord *gr = NULL;
-
-                       r = groupdb_iterator_get(iterator, &gr);
-                       if (r == -ESRCH)
-                               break;
-                       if (r < 0)
-                               return log_error_errno(r, "Failed acquire next group: %m");
-
-                       if (!IN_SET(group_record_disposition(gr), USER_REGULAR, USER_SYSTEM))
-                               continue;
-
-                       if (group_record_disposition(gr) == USER_REGULAR) {
-                               _cleanup_(user_record_unrefp) UserRecord *ur = NULL;
-
-                               r = userdb_by_name(gr->group_name, USERDB_SUPPRESS_SHADOW|USERDB_EXCLUDE_DYNAMIC_USER, &ur);
-                               if (r < 0 && r != -ESRCH)
-                                       return log_error_errno(r, "Failed to check if matching user exists for group '%s': %m", gr->group_name);
-
-                               if (r >= 0 && user_record_disposition(ur) == USER_REGULAR)
-                                       continue;
-                       }
-
-                       r = strv_extend(&available, gr->group_name);
-                       if (r < 0)
-                               return log_oom();
-               }
-       }
+       _cleanup_strv_free_ char **available = NULL;

        for (;;) {
                _cleanup_free_ char *s = NULL;
                unsigned u;
@@ -2531,6 +2541,16 @@ static int create_interactively(void) {
                        break;

                if (streq(s, "list")) {
+                       if (!available) {
+                               r = acquire_group_list(&available);
+                               if (r < 0)
+                                       log_warning_errno(r, "Failed to enumerate available groups, ignoring: %m");
+                               if (r == 0)
+                                       log_notice("Did not find any available groups");
+                               if (r <= 0)
+                                       continue;
+                       }
+
                        r = show_menu(available, /*n_columns=*/ 3, /*width=*/ 20, /*percentage=*/ 60);
                        if (r < 0)
                                return r;
@@ -2579,6 +2599,42 @@ static int create_interactively(void) {
                        return log_error_errno(r, "Failed to set memberOf field: %m");
        }

+       _cleanup_free_ char *shell = NULL;
+
+       for (;;) {
+               shell = mfree(shell);
+
+               r = ask_string(&shell,
+                              "%s Please enter the shell to use for user %s (empty to skip): ",
+                              special_glyph(SPECIAL_GLYPH_TRIANGULAR_BULLET), username);
+               if (r < 0)
+                       return log_error_errno(r, "Failed to query user for shell: %m");
+
+               if (isempty(shell)) {
+                       log_info("No data entered, skipping.");
+                       break;
+               }
+
+               if (!valid_shell(shell)) {
+                       log_notice("Specified shell is not a valid UNIX shell path, try again: %s", shell);
+                       continue;
+               }
+
+               r = RET_NERRNO(access(shell, F_OK));
+               if (r >= 0)
+                       break;
+               if (r != -ENOENT)
+                       return log_error_errno(r, "Failed to check if shell %s exists: %m", shell);
+
+               log_notice("Specified shell '%s' is not installed, try another one.", shell);
+       }
+
+       if (shell) {
+               r = sd_json_variant_set_field_string(&arg_identity_extra, "shell", shell);
+               if (r < 0)
+                       return log_error_errno(r, "Failed to set shell field: %m");
+       }
+
        return create_home_common(/* input= */ NULL);
 }
@@ -973,7 +973,7 @@ int netdev_load_one(Manager *manager, const char *filename) {
        if (r < 0)
                return r;

-       log_netdev_debug(netdev, "loaded \"%s\"", netdev_kind_to_string(netdev->kind));
+       log_syntax(/* unit = */ NULL, LOG_DEBUG, filename, /* config_line = */ 0, /* error = */ 0, "Successfully loaded.");

        r = netdev_request_to_create(netdev);
        if (r < 0)
@@ -6,6 +6,7 @@
 #include <linux/if_arp.h>

 #include "alloc-util.h"
+#include "device-private.h"
 #include "dhcp-client-internal.h"
 #include "hostname-setup.h"
 #include "hostname-util.h"
@@ -1428,27 +1429,33 @@ static int dhcp4_set_request_address(Link *link) {
 }

 static bool link_needs_dhcp_broadcast(Link *link) {
-       const char *val;
        int r;

        assert(link);
        assert(link->network);

        /* Return the setting in DHCP[4].RequestBroadcast if specified. Otherwise return the device property
-        * ID_NET_DHCP_BROADCAST setting, which may be set for interfaces requiring that the DHCPOFFER message
-        * is being broadcast because they can't handle unicast messages while not fully configured.
-        * If neither is set or a failure occurs, return false, which is the default for this flag.
-        */
-       r = link->network->dhcp_broadcast;
-       if (r < 0 && link->dev && sd_device_get_property_value(link->dev, "ID_NET_DHCP_BROADCAST", &val) >= 0) {
-               r = parse_boolean(val);
-               if (r < 0)
-                       log_link_debug_errno(link, r, "DHCPv4 CLIENT: Failed to parse ID_NET_DHCP_BROADCAST, ignoring: %m");
-               else
-                       log_link_debug(link, "DHCPv4 CLIENT: Detected ID_NET_DHCP_BROADCAST='%d'.", r);
+        * ID_NET_DHCP_BROADCAST setting, which may be set for interfaces requiring that the DHCPOFFER
+        * message is being broadcast because they can't handle unicast messages while not fully configured.
+        * If neither is set or a failure occurs, return false, which is the default for this flag. */
+
+       r = link->network->dhcp_broadcast;
+       if (r >= 0)
+               return r;
+
+       if (!link->dev)
+               return false;
+
+       r = device_get_property_bool(link->dev, "ID_NET_DHCP_BROADCAST");
+       if (r < 0) {
+               if (r != -ENOENT)
+                       log_link_warning_errno(link, r, "DHCPv4 CLIENT: Failed to get or parse ID_NET_DHCP_BROADCAST, ignoring: %m");
+
+               return false;
        }
-       return r == true;
+
+       log_link_debug(link, "DHCPv4 CLIENT: Detected ID_NET_DHCP_BROADCAST='%d'.", r);
+       return r;
 }

 static bool link_dhcp4_ipv6_only_mode(Link *link) {
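The rewrite above follows systemd's usual tristate convention (negative means unset, 0/1 are explicit values) and makes the precedence explicit: the `RequestBroadcast=` setting wins, then the `ID_NET_DHCP_BROADCAST` udev property, then the default of false. A hedged, self-contained sketch of that precedence logic, with hypothetical names:

```c
#include <stdbool.h>

/* request_broadcast and device_property are tristates: -1 = unset, 0 = false,
 * 1 = true. Mirrors the precedence in link_needs_dhcp_broadcast(): explicit
 * network config wins, then the device property, then the default (false). */
static bool needs_broadcast(int request_broadcast, bool have_device, int device_property) {
        if (request_broadcast >= 0)
                return request_broadcast;

        if (!have_device)
                return false;

        if (device_property < 0)  /* property unset, or it failed to parse */
                return false;

        return device_property;
}
```

Structuring it as a chain of early returns, as the new code does, also avoids the dead `else` branch and the `r == true` comparison of the old version.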
@@ -1293,9 +1293,9 @@ static int link_get_network(Link *link, Network **ret) {
        }

        log_link_full(link, warn ? LOG_WARNING : LOG_DEBUG,
-                     "found matching network '%s'%s.",
-                     network->filename,
-                     warn ? ", based on potentially unpredictable interface name" : "");
+                     "Found matching .network file%s: %s",
+                     warn ? ", based on potentially unpredictable interface name" : "",
+                     network->filename);

        if (network->unmanaged)
                return -ENOENT;
@@ -1304,7 +1304,7 @@ static int link_get_network(Link *link, Network **ret) {
                return 0;
        }

-       return -ENOENT;
+       return log_link_debug_errno(link, SYNTHETIC_ERRNO(ENOENT), "No matching .network found.");
 }

 int link_reconfigure_impl(Link *link, bool force) {
@@ -590,6 +590,7 @@ int network_load_one(Manager *manager, OrderedHashmap **networks, const char *fi
                return log_warning_errno(r, "%s: Failed to store configuration into hashmap: %m", filename);

        TAKE_PTR(network);
+       log_syntax(/* unit = */ NULL, LOG_DEBUG, filename, /* config_line = */ 0, /* error = */ 0, "Successfully loaded.");
        return 0;
 }
@@ -404,6 +404,10 @@ typedef struct Partition {

        PartitionEncryptedVolume *encrypted_volume;

+       char *supplement_for_name;
+       struct Partition *supplement_for, *supplement_target_for;
+       struct Partition *suppressing;
+
        struct Partition *siblings[_VERITY_MODE_MAX];

        LIST_FIELDS(struct Partition, partitions);
@@ -411,6 +415,7 @@ typedef struct Partition {

 #define PARTITION_IS_FOREIGN(p) (!(p)->definition_path)
 #define PARTITION_EXISTS(p) (!!(p)->current_partition)
+#define PARTITION_SUPPRESSED(p) ((p)->supplement_for && (p)->supplement_for->suppressing == (p))

 struct FreeArea {
        Partition *after;
@@ -520,6 +525,28 @@ static Partition *partition_new(void) {
        return p;
 }

+static void partition_unlink_supplement(Partition *p) {
+        assert(p);
+
+        assert(!p->supplement_for || !p->supplement_target_for); /* Can't be both */
+
+        if (p->supplement_target_for) {
+                assert(p->supplement_target_for->supplement_for == p);
+
+                p->supplement_target_for->supplement_for = NULL;
+        }
+
+        if (p->supplement_for) {
+                assert(p->supplement_for->supplement_target_for == p);
+                assert(!p->supplement_for->suppressing || p->supplement_for->suppressing == p);
+
+                p->supplement_for->supplement_target_for = p->supplement_for->suppressing = NULL;
+        }
+
+        p->supplement_for_name = mfree(p->supplement_for_name);
+        p->supplement_target_for = p->supplement_for = p->suppressing = NULL;
+}
+
 static Partition* partition_free(Partition *p) {
        if (!p)
                return NULL;
@@ -563,6 +590,8 @@ static Partition* partition_free(Partition *p) {

        partition_encrypted_volume_free(p->encrypted_volume);

+       partition_unlink_supplement(p);
+
        return mfree(p);
 }
@@ -608,6 +637,8 @@ static void partition_foreignize(Partition *p) {
        p->n_mountpoints = 0;

        p->encrypted_volume = partition_encrypted_volume_free(p->encrypted_volume);
+
+       partition_unlink_supplement(p);
 }

 static bool partition_type_exclude(const GptPartitionType *type) {
@@ -740,6 +771,10 @@ static void partition_drop_or_foreignize(Partition *p) {

                p->dropped = true;
                p->allocated_to_area = NULL;
+
+               /* If a supplement partition is dropped, we don't want to merge in its settings. */
+               if (PARTITION_SUPPRESSED(p))
+                       p->supplement_for->suppressing = NULL;
        }
 }
@@ -775,7 +810,7 @@ static bool context_drop_or_foreignize_one_priority(Context *context) {
 }

 static uint64_t partition_min_size(const Context *context, const Partition *p) {
-       uint64_t sz;
+       uint64_t sz, override_min;

        assert(context);
        assert(p);
@@ -817,11 +852,13 @@ static uint64_t partition_min_size(const Context *context, const Partition *p) {
                sz = d;
        }

-       return MAX(round_up_size(p->size_min != UINT64_MAX ? p->size_min : DEFAULT_MIN_SIZE, context->grain_size), sz);
+       override_min = p->suppressing ? MAX(p->size_min, p->suppressing->size_min) : p->size_min;
+
+       return MAX(round_up_size(override_min != UINT64_MAX ? override_min : DEFAULT_MIN_SIZE, context->grain_size), sz);
 }

 static uint64_t partition_max_size(const Context *context, const Partition *p) {
-       uint64_t sm;
+       uint64_t sm, override_max;

        /* Calculate how large the partition may become at max. This is generally the configured maximum
         * size, except when it already exists and is larger than that. In that case it's the existing size,
@@ -839,10 +876,11 @@ static uint64_t partition_max_size(const Context *context, const Partition *p) {
        if (p->verity == VERITY_SIG)
                return VERITY_SIG_SIZE;

-       if (p->size_max == UINT64_MAX)
+       override_max = p->suppressing ? MIN(p->size_max, p->suppressing->size_max) : p->size_max;
+       if (override_max == UINT64_MAX)
                return UINT64_MAX;

-       sm = round_down_size(p->size_max, context->grain_size);
+       sm = round_down_size(override_max, context->grain_size);

        if (p->current_size != UINT64_MAX)
                sm = MAX(p->current_size, sm);
@@ -851,13 +889,17 @@ static uint64_t partition_max_size(const Context *context, const Partition *p) {
 }

 static uint64_t partition_min_padding(const Partition *p) {
+       uint64_t override_min;
+
        assert(p);
-       return p->padding_min != UINT64_MAX ? p->padding_min : 0;
+
+       override_min = p->suppressing ? MAX(p->padding_min, p->suppressing->padding_min) : p->padding_min;
+       return override_min != UINT64_MAX ? override_min : 0;
 }

 static uint64_t partition_max_padding(const Partition *p) {
        assert(p);
-       return p->padding_max;
+       return p->suppressing ? MIN(p->padding_max, p->suppressing->padding_max) : p->padding_max;
 }

 static uint64_t partition_min_size_with_padding(Context *context, const Partition *p) {
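The suppressing-aware helpers above all apply the merge rules documented for SupplementFor=: minimums take the larger of the two values, maximums the smaller, with UINT64_MAX serving as the "unset" sentinel. A condensed sketch of just that rule (helper names are hypothetical, not from the diff):

```c
#include <stdint.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))
#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* UINT64_MAX means "unset". MAX() of two minimums yields "unset" whenever
 * either side is unset (callers like partition_min_size() then fall back to
 * their default), while MIN() of two maximums automatically prefers whichever
 * maximum is actually configured. */
static uint64_t merged_size_min(uint64_t own, uint64_t supplement) {
        return MAX(own, supplement);
}

static uint64_t merged_size_max(uint64_t own, uint64_t supplement) {
        return MIN(own, supplement);
}
```

Using the sentinel this way keeps the merge a single MAX/MIN expression per field, which is exactly the shape the `override_min`/`override_max` lines in the hunks take.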
@@ -977,14 +1019,22 @@ static bool context_allocate_partitions(Context *context, uint64_t *ret_largest_
                uint64_t required;
                FreeArea *a = NULL;

-               /* Skip partitions we already dropped or that already exist */
-               if (p->dropped || PARTITION_EXISTS(p))
+               if (p->dropped || PARTITION_IS_FOREIGN(p) || PARTITION_SUPPRESSED(p))
                        continue;

                /* How much do we need to fit? */
                required = partition_min_size_with_padding(context, p);
                assert(required % context->grain_size == 0);

+               /* For existing partitions, we should verify that they'll actually fit */
+               if (PARTITION_EXISTS(p)) {
+                       if (p->current_size + p->current_padding < required)
+                               return false; /* 😢 We won't be able to grow to the required min size! */
+
+                       continue;
+               }
+
+               /* For new partitions, see if there's a free area big enough */
                for (size_t i = 0; i < context->n_free_areas; i++) {
                        a = context->free_areas[i];
@@ -1007,6 +1057,57 @@ static bool context_allocate_partitions(Context *context, uint64_t *ret_largest_
        return true;
 }

+static bool context_unmerge_and_allocate_partitions(Context *context) {
+        assert(context);
+
+        /* This should only be called after plain context_allocate_partitions fails. This algorithm will
+         * try, in the order that minimizes the number of created supplement partitions, all combinations of
+         * un-suppressing supplement partitions until it finds one that works. */
+
+        /* First, let's try to un-suppress just one supplement partition and see if that gets us anywhere */
+        LIST_FOREACH(partitions, p, context->partitions) {
+                Partition *unsuppressed;
+
+                if (!p->suppressing)
+                        continue;
+
+                unsuppressed = TAKE_PTR(p->suppressing);
+
+                if (context_allocate_partitions(context, NULL))
+                        return true;
+
+                p->suppressing = unsuppressed;
+        }
+
+        /* Looks like not. So we have to un-suppress at least two partitions. We can do this recursively */
+        LIST_FOREACH(partitions, p, context->partitions) {
+                Partition *unsuppressed;
+
+                if (!p->suppressing)
+                        continue;
+
+                unsuppressed = TAKE_PTR(p->suppressing);
+
+                if (context_unmerge_and_allocate_partitions(context))
+                        return true;
+
+                p->suppressing = unsuppressed;
+        }
+
+        /* No combination of un-suppressed supplements made it possible to fit the partitions */
+        return false;
+}
+
+static uint32_t partition_weight(const Partition *p) {
+        assert(p);
+        return p->suppressing ? p->suppressing->weight : p->weight;
+}
+
+static uint32_t partition_padding_weight(const Partition *p) {
+        assert(p);
+        return p->suppressing ? p->suppressing->padding_weight : p->padding_weight;
+}
+
 static int context_sum_weights(Context *context, FreeArea *a, uint64_t *ret) {
        uint64_t weight_sum = 0;
@@ -1020,13 +1121,11 @@ static int context_sum_weights(Context *context, FreeArea *a, uint64_t *ret) {
                if (p->padding_area != a && p->allocated_to_area != a)
                        continue;

-               if (p->weight > UINT64_MAX - weight_sum)
+               if (!INC_SAFE(&weight_sum, partition_weight(p)))
                        goto overflow_sum;
-               weight_sum += p->weight;

-               if (p->padding_weight > UINT64_MAX - weight_sum)
+               if (!INC_SAFE(&weight_sum, partition_padding_weight(p)))
                        goto overflow_sum;
-               weight_sum += p->padding_weight;
        }

        *ret = weight_sum;
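`INC_SAFE()` replaces the two manual check-then-add sequences with a single overflow-checked increment. A minimal sketch of such a helper (systemd's actual macro definition may differ; this only illustrates the contract the loop above relies on):

```c
#include <stdbool.h>
#include <stdint.h>

/* Add v onto *acc only if the sum fits in uint64_t; otherwise leave *acc
 * untouched and report failure, mirroring how the summation loop above bails
 * out to overflow_sum. */
static bool inc_safe_u64(uint64_t *acc, uint64_t v) {
        if (v > UINT64_MAX - *acc)
                return false;

        *acc += v;
        return true;
}
```

Folding the check and the addition into one call also removes the subtle hazard of the old code, where the comparison and the `+=` could drift apart during refactoring.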
@@ -1091,7 +1190,6 @@ static bool context_grow_partitions_phase(
         * get any additional room from the left-overs. Similar, if two partitions have the same weight they
         * should get the same space if possible, even if one has a smaller minimum size than the other. */
        LIST_FOREACH(partitions, p, context->partitions) {
-
                /* Look only at partitions associated with this free area, i.e. immediately
                 * preceding it, or allocated into it */
                if (p->allocated_to_area != a && p->padding_area != a)
@@ -1099,11 +1197,14 @@ static bool context_grow_partitions_phase(

                if (p->new_size == UINT64_MAX) {
                        uint64_t share, rsz, xsz;
+                       uint32_t weight;
                        bool charge = false;

+                       weight = partition_weight(p);
+
                        /* Calculate how much space this partition needs if everyone would get
                         * the weight based share */
-                       share = scale_by_weight(*span, p->weight, *weight_sum);
+                       share = scale_by_weight(*span, weight, *weight_sum);

                        rsz = partition_min_size(context, p);
                        xsz = partition_max_size(context, p);

@@ -1143,15 +1244,18 @@ static bool context_grow_partitions_phase(

                        if (charge) {
                                *span = charge_size(context, *span, p->new_size);
-                               *weight_sum = charge_weight(*weight_sum, p->weight);
+                               *weight_sum = charge_weight(*weight_sum, weight);
                        }
                }

                if (p->new_padding == UINT64_MAX) {
                        uint64_t share, rsz, xsz;
+                       uint32_t padding_weight;
                        bool charge = false;

-                       share = scale_by_weight(*span, p->padding_weight, *weight_sum);
+                       padding_weight = partition_padding_weight(p);
+
+                       share = scale_by_weight(*span, padding_weight, *weight_sum);

                        rsz = partition_min_padding(p);
                        xsz = partition_max_padding(p);
@@ -1170,7 +1274,7 @@ static bool context_grow_partitions_phase(

                        if (charge) {
                                *span = charge_size(context, *span, p->new_padding);
-                               *weight_sum = charge_weight(*weight_sum, p->padding_weight);
+                               *weight_sum = charge_weight(*weight_sum, padding_weight);
                        }
                }
        }
@@ -2155,7 +2259,9 @@ static int partition_finalize_fstype(Partition *p, const char *path) {

 static bool partition_needs_populate(const Partition *p) {
        assert(p);
-       return !strv_isempty(p->copy_files) || !strv_isempty(p->make_directories) || !strv_isempty(p->make_symlinks);
+       assert(!p->supplement_for || !p->suppressing); /* Avoid infinite recursion */
+       return !strv_isempty(p->copy_files) || !strv_isempty(p->make_directories) || !strv_isempty(p->make_symlinks) ||
+              (p->suppressing && partition_needs_populate(p->suppressing));
 }

 static int partition_read_definition(Partition *p, const char *path, const char *const *conf_file_dirs) {
@@ -2196,6 +2302,7 @@ static int partition_read_definition(Partition *p, const char *path, const char
                { "Partition", "EncryptedVolume", config_parse_encrypted_volume, 0, p },
                { "Partition", "Compression", config_parse_string, CONFIG_PARSE_STRING_SAFE_AND_ASCII, &p->compression },
                { "Partition", "CompressionLevel", config_parse_string, CONFIG_PARSE_STRING_SAFE_AND_ASCII, &p->compression_level },
+               { "Partition", "SupplementFor", config_parse_string, 0, &p->supplement_for_name },
                {}
        };
        _cleanup_free_ char *filename = NULL;
@@ -2320,6 +2427,18 @@ static int partition_read_definition(Partition *p, const char *path, const char
                return log_syntax(NULL, LOG_ERR, path, 1, SYNTHETIC_ERRNO(EINVAL),
                                  "DefaultSubvolume= must be one of the paths in Subvolumes=.");

+       if (p->supplement_for_name) {
+               if (!filename_part_is_valid(p->supplement_for_name))
+                       return log_syntax(NULL, LOG_ERR, path, 1, SYNTHETIC_ERRNO(EINVAL),
+                                         "SupplementFor= is an invalid filename: %s",
+                                         p->supplement_for_name);
+
+               if (p->copy_blocks_path || p->copy_blocks_auto || p->encrypt != ENCRYPT_OFF ||
+                   p->verity != VERITY_OFF)
+                       return log_syntax(NULL, LOG_ERR, path, 1, SYNTHETIC_ERRNO(EINVAL),
+                                         "SupplementFor= cannot be combined with CopyBlocks=/Encrypt=/Verity=");
+       }
+
        /* Verity partitions are read only, let's imply the RO flag hence, unless explicitly configured otherwise. */
        if ((IN_SET(p->type.designator,
                    PARTITION_ROOT_VERITY,
@@ -2626,6 +2745,58 @@ static int context_copy_from(Context *context) {
         return 0;
 }

+static bool check_cross_def_ranges_valid(uint64_t a_min, uint64_t a_max, uint64_t b_min, uint64_t b_max) {
+        if (a_min == UINT64_MAX && b_min == UINT64_MAX)
+                return true;
+
+        if (a_max == UINT64_MAX && b_max == UINT64_MAX)
+                return true;
+
+        return MAX(a_min != UINT64_MAX ? a_min : 0, b_min != UINT64_MAX ? b_min : 0) <= MIN(a_max, b_max);
+}
+
+static int supplement_find_target(const Context *context, const Partition *supplement, Partition **ret) {
+        int r;
+
+        assert(context);
+        assert(supplement);
+        assert(ret);
+
+        LIST_FOREACH(partitions, p, context->partitions) {
+                _cleanup_free_ char *filename = NULL;
+
+                if (p == supplement)
+                        continue;
+
+                r = path_extract_filename(p->definition_path, &filename);
+                if (r < 0)
+                        return log_error_errno(r,
+                                               "Failed to extract filename from path '%s': %m",
+                                               p->definition_path);
+
+                *ASSERT_PTR(endswith(filename, ".conf")) = 0; /* Remove the file extension */
+
+                if (!streq(supplement->supplement_for_name, filename))
+                        continue;
+
+                if (p->supplement_for_name)
+                        return log_syntax(NULL, LOG_ERR, supplement->definition_path, 1, SYNTHETIC_ERRNO(EINVAL),
+                                          "SupplementFor= target is itself configured as a supplement.");
+
+                if (p->suppressing)
+                        return log_syntax(NULL, LOG_ERR, supplement->definition_path, 1, SYNTHETIC_ERRNO(EINVAL),
+                                          "SupplementFor= target already has a supplement defined: %s",
+                                          p->suppressing->definition_path);
+
+                *ret = p;
+                return 0;
+        }
+
+        return log_syntax(NULL, LOG_ERR, supplement->definition_path, 1, SYNTHETIC_ERRNO(EINVAL),
+                          "Couldn't find target partition for SupplementFor=%s",
+                          supplement->supplement_for_name);
+}
+
 static int context_read_definitions(Context *context) {
         _cleanup_strv_free_ char **files = NULL;
         Partition *last = LIST_FIND_TAIL(partitions, context->partitions);

@@ -2717,7 +2888,33 @@ static int context_read_definitions(Context *context) {
                 if (dp->minimize == MINIMIZE_OFF && !(dp->copy_blocks_path || dp->copy_blocks_auto))
                         return log_syntax(NULL, LOG_ERR, p->definition_path, 1, SYNTHETIC_ERRNO(EINVAL),
                                           "Minimize= set for verity hash partition but data partition does not set CopyBlocks= or Minimize=.");
         }

+        LIST_FOREACH(partitions, p, context->partitions) {
+                Partition *tgt = NULL;
+
+                if (!p->supplement_for_name)
+                        continue;
+
+                r = supplement_find_target(context, p, &tgt);
+                if (r < 0)
+                        return r;
+
+                if (tgt->copy_blocks_path || tgt->copy_blocks_auto || tgt->encrypt != ENCRYPT_OFF ||
+                    tgt->verity != VERITY_OFF)
+                        return log_syntax(NULL, LOG_ERR, p->definition_path, 1, SYNTHETIC_ERRNO(EINVAL),
+                                          "SupplementFor= target uses CopyBlocks=/Encrypt=/Verity=");
+
+                if (!check_cross_def_ranges_valid(p->size_min, p->size_max, tgt->size_min, tgt->size_max))
+                        return log_syntax(NULL, LOG_ERR, p->definition_path, 1, SYNTHETIC_ERRNO(EINVAL),
+                                          "SizeMinBytes= larger than SizeMaxBytes= when merged with SupplementFor= target.");
+
+                if (!check_cross_def_ranges_valid(p->padding_min, p->padding_max, tgt->padding_min, tgt->padding_max))
+                        return log_syntax(NULL, LOG_ERR, p->definition_path, 1, SYNTHETIC_ERRNO(EINVAL),
+                                          "PaddingMinBytes= larger than PaddingMaxBytes= when merged with SupplementFor= target.");
+
+                p->supplement_for = tgt;
+                tgt->suppressing = tgt->supplement_target_for = p;
+        }
+
         return 0;
@@ -3101,6 +3298,10 @@ static int context_load_partition_table(Context *context) {
                 }
         }

+        LIST_FOREACH(partitions, p, context->partitions)
+                if (PARTITION_SUPPRESSED(p) && PARTITION_EXISTS(p))
+                        p->supplement_for->suppressing = NULL;
+
 add_initial_free_area:
         nsectors = fdisk_get_nsectors(c);
         assert(nsectors <= UINT64_MAX/secsz);

@@ -3192,6 +3393,11 @@ static void context_unload_partition_table(Context *context) {

                 p->current_uuid = SD_ID128_NULL;
                 p->current_label = mfree(p->current_label);
+
+                /* A supplement partition is only ever un-suppressed if the existing partition table prevented
+                 * us from suppressing it. So when unloading the partition table, we must re-suppress. */
+                if (p->supplement_for)
+                        p->supplement_for->suppressing = p;
         }

         context->start = UINT64_MAX;
@@ -4969,6 +5175,31 @@ static int add_exclude_path(const char *path, Hashmap **denylist, DenyType type)
         return 0;
 }

+static int shallow_join_strv(char ***ret, char **a, char **b) {
+        _cleanup_free_ char **joined = NULL;
+        char **iter;
+
+        assert(ret);
+
+        joined = new(char*, strv_length(a) + strv_length(b) + 1);
+        if (!joined)
+                return log_oom();
+
+        iter = joined;
+
+        STRV_FOREACH(i, a)
+                *(iter++) = *i;
+
+        STRV_FOREACH(i, b)
+                if (!strv_contains(joined, *i))
+                        *(iter++) = *i;
+
+        *iter = NULL;
+
+        *ret = TAKE_PTR(joined);
+        return 0;
+}
+
 static int make_copy_files_denylist(
                 Context *context,
                 const Partition *p,

@@ -4977,6 +5208,7 @@ static int make_copy_files_denylist(
                 Hashmap **ret) {

         _cleanup_hashmap_free_ Hashmap *denylist = NULL;
+        _cleanup_free_ char **override_exclude_src = NULL, **override_exclude_tgt = NULL;
         int r;

         assert(context);

@@ -4996,13 +5228,26 @@ static int make_copy_files_denylist(

         /* Add the user configured excludes. */

-        STRV_FOREACH(e, p->exclude_files_source) {
+        if (p->suppressing) {
+                r = shallow_join_strv(&override_exclude_src,
+                                      p->exclude_files_source,
+                                      p->suppressing->exclude_files_source);
+                if (r < 0)
+                        return r;
+                r = shallow_join_strv(&override_exclude_tgt,
+                                      p->exclude_files_target,
+                                      p->suppressing->exclude_files_target);
+                if (r < 0)
+                        return r;
+        }
+
+        STRV_FOREACH(e, override_exclude_src ?: p->exclude_files_source) {
                 r = add_exclude_path(*e, &denylist, endswith(*e, "/") ? DENY_CONTENTS : DENY_INODE);
                 if (r < 0)
                         return r;
         }

-        STRV_FOREACH(e, p->exclude_files_target) {
+        STRV_FOREACH(e, override_exclude_tgt ?: p->exclude_files_target) {
                 _cleanup_free_ char *path = NULL;

                 const char *s = path_startswith(*e, target);
@@ -5096,6 +5341,7 @@ static int add_subvolume_path(const char *path, Set **subvolumes) {
 static int make_subvolumes_strv(const Partition *p, char ***ret) {
         _cleanup_strv_free_ char **subvolumes = NULL;
         Subvolume *subvolume;
+        int r;

         assert(p);
         assert(ret);

@@ -5104,6 +5350,18 @@ static int make_subvolumes_strv(const Partition *p, char ***ret) {
                 if (strv_extend(&subvolumes, subvolume->path) < 0)
                         return log_oom();

+        if (p->suppressing) {
+                _cleanup_strv_free_ char **suppressing = NULL;
+
+                r = make_subvolumes_strv(p->suppressing, &suppressing);
+                if (r < 0)
+                        return r;
+
+                r = strv_extend_strv(&subvolumes, suppressing, /* filter_duplicates= */ true);
+                if (r < 0)
+                        return log_oom();
+        }
+
         *ret = TAKE_PTR(subvolumes);
         return 0;
 }

@@ -5114,18 +5372,22 @@ static int make_subvolumes_set(
                 const char *target,
                 Set **ret) {

+        _cleanup_strv_free_ char **paths = NULL;
         _cleanup_set_free_ Set *subvolumes = NULL;
-        Subvolume *subvolume;
         int r;

         assert(p);
         assert(target);
         assert(ret);

-        ORDERED_HASHMAP_FOREACH(subvolume, p->subvolumes) {
+        r = make_subvolumes_strv(p, &paths);
+        if (r < 0)
+                return r;
+
+        STRV_FOREACH(subvolume, paths) {
                 _cleanup_free_ char *path = NULL;

-                const char *s = path_startswith(subvolume->path, target);
+                const char *s = path_startswith(*subvolume, target);
                 if (!s)
                         continue;
@@ -5168,6 +5430,7 @@ static usec_t epoch_or_infinity(void) {

 static int do_copy_files(Context *context, Partition *p, const char *root) {
         _cleanup_strv_free_ char **subvolumes = NULL;
+        _cleanup_free_ char **override_copy_files = NULL;
         int r;

         assert(p);

@@ -5177,11 +5440,17 @@ static int do_copy_files(Context *context, Partition *p, const char *root) {
         if (r < 0)
                 return r;

+        if (p->suppressing) {
+                r = shallow_join_strv(&override_copy_files, p->copy_files, p->suppressing->copy_files);
+                if (r < 0)
+                        return r;
+        }
+
         /* copy_tree_at() automatically copies the permissions of source directories to target directories if
          * it created them. However, the root directory is created by us, so we have to manually take care
          * that it is initialized. We use the first source directory targeting "/" as the metadata source for
          * the root directory. */
-        STRV_FOREACH_PAIR(source, target, p->copy_files) {
+        STRV_FOREACH_PAIR(source, target, override_copy_files ?: p->copy_files) {
                 _cleanup_close_ int rfd = -EBADF, sfd = -EBADF;

                 if (!path_equal(*target, "/"))

@@ -5202,7 +5471,7 @@ static int do_copy_files(Context *context, Partition *p, const char *root) {
                 break;
         }

-        STRV_FOREACH_PAIR(source, target, p->copy_files) {
+        STRV_FOREACH_PAIR(source, target, override_copy_files ?: p->copy_files) {
                 _cleanup_hashmap_free_ Hashmap *denylist = NULL;
                 _cleanup_set_free_ Set *subvolumes_by_source_inode = NULL;
                 _cleanup_close_ int sfd = -EBADF, pfd = -EBADF, tfd = -EBADF;
@@ -5320,6 +5589,7 @@ static int do_copy_files(Context *context, Partition *p, const char *root) {

 static int do_make_directories(Partition *p, const char *root) {
         _cleanup_strv_free_ char **subvolumes = NULL;
+        _cleanup_free_ char **override_dirs = NULL;
         int r;

         assert(p);

@@ -5329,7 +5599,13 @@ static int do_make_directories(Partition *p, const char *root) {
         if (r < 0)
                 return r;

-        STRV_FOREACH(d, p->make_directories) {
+        if (p->suppressing) {
+                r = shallow_join_strv(&override_dirs, p->make_directories, p->suppressing->make_directories);
+                if (r < 0)
+                        return r;
+        }
+
+        STRV_FOREACH(d, override_dirs ?: p->make_directories) {
                 r = mkdir_p_root_full(root, *d, UID_INVALID, GID_INVALID, 0755, epoch_or_infinity(), subvolumes);
                 if (r < 0)
                         return log_error_errno(r, "Failed to create directory '%s' in file system: %m", *d);

@@ -5377,6 +5653,12 @@ static int make_subvolumes_read_only(Partition *p, const char *root) {
                         return log_error_errno(r, "Failed to make subvolume '%s' read-only: %m", subvolume->path);
         }

+        if (p->suppressing) {
+                r = make_subvolumes_read_only(p->suppressing, root);
+                if (r < 0)
+                        return r;
+        }
+
         return 0;
 }
@@ -5496,6 +5778,38 @@ static int partition_populate_filesystem(Context *context, Partition *p, const c
         return 0;
 }

+static int append_btrfs_subvols(char ***l, OrderedHashmap *subvolumes, const char *default_subvolume) {
+        Subvolume *subvolume;
+        int r;
+
+        assert(l);
+
+        ORDERED_HASHMAP_FOREACH(subvolume, subvolumes) {
+                _cleanup_free_ char *s = NULL, *f = NULL;
+
+                s = strdup(subvolume->path);
+                if (!s)
+                        return log_oom();
+
+                f = subvolume_flags_to_string(subvolume->flags);
+                if (!f)
+                        return log_oom();
+
+                if (streq_ptr(subvolume->path, default_subvolume) &&
+                    !strextend_with_separator(&f, ",", "default"))
+                        return log_oom();
+
+                if (!isempty(f) && !strextend_with_separator(&s, ":", f))
+                        return log_oom();
+
+                r = strv_extend_many(l, "--subvol", s);
+                if (r < 0)
+                        return log_oom();
+        }
+
+        return 0;
+}
+
 static int finalize_extra_mkfs_options(const Partition *p, const char *root, char ***ret) {
         _cleanup_strv_free_ char **sv = NULL;
         int r;

@@ -5510,28 +5824,14 @@ static int finalize_extra_mkfs_options(const Partition *p, const char *root, cha
                                        p->format);

         if (partition_needs_populate(p) && root && streq(p->format, "btrfs")) {
-                Subvolume *subvolume;
-
-                ORDERED_HASHMAP_FOREACH(subvolume, p->subvolumes) {
-                        _cleanup_free_ char *s = NULL, *f = NULL;
-
-                        s = strdup(subvolume->path);
-                        if (!s)
-                                return log_oom();
-
-                        f = subvolume_flags_to_string(subvolume->flags);
-                        if (!f)
-                                return log_oom();
-
-                        if (streq_ptr(subvolume->path, p->default_subvolume) && !strextend_with_separator(&f, ",", "default"))
-                                return log_oom();
-
-                        if (!isempty(f) && !strextend_with_separator(&s, ":", f))
-                                return log_oom();
-
-                        r = strv_extend_many(&sv, "--subvol", s);
-                        if (r < 0)
-                                return log_oom();
+                r = append_btrfs_subvols(&sv, p->subvolumes, p->default_subvolume);
+                if (r < 0)
+                        return r;
+
+                if (p->suppressing) {
+                        r = append_btrfs_subvols(&sv, p->suppressing->subvolumes, NULL);
+                        if (r < 0)
+                                return r;
                 }
         }
@@ -8524,7 +8824,7 @@ static int determine_auto_size(Context *c) {
         LIST_FOREACH(partitions, p, c->partitions) {
                 uint64_t m;

-                if (p->dropped)
+                if (p->dropped || PARTITION_SUPPRESSED(p))
                         continue;

                 m = partition_min_size_with_padding(c, p);

@@ -8756,13 +9056,36 @@ static int run(int argc, char *argv[]) {
                 if (context_allocate_partitions(context, &largest_free_area))
                         break; /* Success! */

-                if (!context_drop_or_foreignize_one_priority(context)) {
-                        r = log_error_errno(SYNTHETIC_ERRNO(ENOSPC),
-                                            "Can't fit requested partitions into available free space (%s), refusing.",
-                                            FORMAT_BYTES(largest_free_area));
-                        determine_auto_size(context);
-                        return r;
-                }
-        }
+                if (context_unmerge_and_allocate_partitions(context))
+                        break; /* We had to un-suppress a supplement or few, but still success! */
+
+                if (context_drop_or_foreignize_one_priority(context))
+                        continue; /* Still no luck. Let's drop a priority and try again. */
+
+                /* No more priorities left to drop. This configuration just doesn't fit on this disk... */
+                r = log_error_errno(SYNTHETIC_ERRNO(ENOSPC),
+                                    "Can't fit requested partitions into available free space (%s), refusing.",
+                                    FORMAT_BYTES(largest_free_area));
+                determine_auto_size(context);
+                return r;
+        }
+
+        LIST_FOREACH(partitions, p, context->partitions) {
+                if (!p->supplement_for)
+                        continue;
+
+                if (PARTITION_SUPPRESSED(p)) {
+                        assert(!p->allocated_to_area);
+                        p->dropped = true;
+
+                        log_debug("Partition %s can be merged into %s, suppressing supplement.",
+                                  p->definition_path, p->supplement_for->definition_path);
+                } else if (PARTITION_EXISTS(p))
+                        log_info("Partition %s already exists on disk, using supplement verbatim.",
+                                 p->definition_path);
+                else
+                        log_info("Couldn't allocate partitions with %s merged into %s, using supplement verbatim.",
+                                 p->definition_path, p->supplement_for->definition_path);
+        }

         /* Now assign free space according to the weight logic */
@@ -465,10 +465,6 @@ int hashmap_put_stats_by_path(Hashmap **stats_by_path, const char *path, const s
         assert(path);
         assert(st);

-        r = hashmap_ensure_allocated(stats_by_path, &path_hash_ops_free_free);
-        if (r < 0)
-                return r;
-
         st_copy = newdup(struct stat, st, 1);
         if (!st_copy)
                 return -ENOMEM;

@@ -477,7 +473,7 @@ int hashmap_put_stats_by_path(Hashmap **stats_by_path, const char *path, const s
         if (!path_copy)
                 return -ENOMEM;

-        r = hashmap_put(*stats_by_path, path_copy, st_copy);
+        r = hashmap_ensure_put(stats_by_path, &path_hash_ops_free_free, path_copy, st_copy);
         if (r < 0)
                 return r;

@@ -502,12 +498,12 @@ static int config_parse_many_files(
         _cleanup_ordered_hashmap_free_ OrderedHashmap *dropins = NULL;
         _cleanup_set_free_ Set *inodes = NULL;
         struct stat st;
-        int r;
+        int r, level = FLAGS_SET(flags, CONFIG_PARSE_WARN) ? LOG_WARNING : LOG_DEBUG;

         if (ret_stats_by_path) {
                 stats_by_path = hashmap_new(&path_hash_ops_free_free);
                 if (!stats_by_path)
-                        return -ENOMEM;
+                        return log_oom_full(level);
         }

         STRV_FOREACH(fn, files) {

@@ -518,14 +514,14 @@ static int config_parse_many_files(
                 if (r == -ENOENT)
                         continue;
                 if (r < 0)
-                        return r;
+                        return log_full_errno(level, r, "Failed to open %s: %m", *fn);

                 int fd = fileno(f);

                 r = ordered_hashmap_ensure_put(&dropins, &config_file_hash_ops_fclose, *fn, f);
                 if (r < 0) {
-                        assert(r != -EEXIST);
-                        return r;
+                        assert(r == -ENOMEM);
+                        return log_oom_full(level);
                 }
                 assert(r > 0);
                 TAKE_PTR(f);

@@ -535,14 +531,14 @@ static int config_parse_many_files(

                 _cleanup_free_ struct stat *st_dropin = new(struct stat, 1);
                 if (!st_dropin)
-                        return -ENOMEM;
+                        return log_oom_full(level);

                 if (fstat(fd, st_dropin) < 0)
-                        return -errno;
+                        return log_full_errno(level, errno, "Failed to stat %s: %m", *fn);

                 r = set_ensure_consume(&inodes, &inode_hash_ops, TAKE_PTR(st_dropin));
                 if (r < 0)
-                        return r;
+                        return log_oom_full(level);
         }

         /* First read the first found main config file. */

@@ -553,11 +549,11 @@ static int config_parse_many_files(
                 if (r == -ENOENT)
                         continue;
                 if (r < 0)
-                        return r;
+                        return log_full_errno(level, r, "Failed to open %s: %m", *fn);

                 if (inodes) {
                         if (fstat(fileno(f), &st) < 0)
-                                return -errno;
+                                return log_full_errno(level, errno, "Failed to stat %s: %m", *fn);

                         if (set_contains(inodes, &st)) {
                                 log_debug("%s: symlink to/symlinked as drop-in, will be read later.", *fn);

@@ -567,13 +563,13 @@ static int config_parse_many_files(

                 r = config_parse(/* unit= */ NULL, *fn, f, sections, lookup, table, flags, userdata, &st);
                 if (r < 0)
-                        return r;
+                        return r; /* config_parse() logs internally. */
                 assert(r > 0);

                 if (ret_stats_by_path) {
                         r = hashmap_put_stats_by_path(&stats_by_path, *fn, &st);
                         if (r < 0)
-                                return r;
+                                return log_full_errno(level, r, "Failed to save stats of %s: %m", *fn);
                 }

                 break;

@@ -586,13 +582,13 @@ static int config_parse_many_files(
         ORDERED_HASHMAP_FOREACH_KEY(f_dropin, path_dropin, dropins) {
                 r = config_parse(/* unit= */ NULL, path_dropin, f_dropin, sections, lookup, table, flags, userdata, &st);
                 if (r < 0)
-                        return r;
+                        return r; /* config_parse() logs internally. */
                 assert(r > 0);

                 if (ret_stats_by_path) {
                         r = hashmap_put_stats_by_path(&stats_by_path, path_dropin, &st);
                         if (r < 0)
-                                return r;
+                                return log_full_errno(level, r, "Failed to save stats of %s: %m", path_dropin);
                 }
         }

@@ -625,11 +621,12 @@ int config_parse_many(

         r = conf_files_list_dropins(&files, dropin_dirname, root, conf_file_dirs);
         if (r < 0)
-                return r;
+                return log_full_errno(FLAGS_SET(flags, CONFIG_PARSE_WARN) ? LOG_WARNING : LOG_DEBUG, r,
+                                      "Failed to list up drop-in configs in %s: %m", dropin_dirname);

         r = config_parse_many_files(root, conf_files, files, sections, lookup, table, flags, userdata, ret_stats_by_path);
         if (r < 0)
-                return r;
+                return r; /* config_parse_many_files() logs internally. */

         if (ret_dropin_files)
                 *ret_dropin_files = TAKE_PTR(files);

@@ -650,22 +647,16 @@ int config_parse_standard_file_with_dropins_full(

         const char* const *conf_paths = (const char* const*) CONF_PATHS_STRV("");
         _cleanup_strv_free_ char **configs = NULL;
-        int r;
+        int r, level = FLAGS_SET(flags, CONFIG_PARSE_WARN) ? LOG_WARNING : LOG_DEBUG;

         /* Build the list of main config files */
         r = strv_extend_strv_biconcat(&configs, root, conf_paths, main_file);
-        if (r < 0) {
-                if (flags & CONFIG_PARSE_WARN)
-                        log_oom();
-                return r;
-        }
+        if (r < 0)
+                return log_oom_full(level);

         _cleanup_free_ char *dropin_dirname = strjoin(main_file, ".d");
-        if (!dropin_dirname) {
-                if (flags & CONFIG_PARSE_WARN)
-                        log_oom();
-                return -ENOMEM;
-        }
+        if (!dropin_dirname)
+                return log_oom_full(level);

         return config_parse_many(
                         (const char* const*) configs,
@@ -526,9 +526,6 @@ def call_systemd_measure(uki, linux, opts):

     # First, pick up the sections we shall measure now */
     for s in uki.sections:
-
-        print(s)
-
         if not s.measure:
             continue
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: LGPL-2.1-or-later
+
+all setup run clean clean-again:
+	true
+
+.PHONY: all setup run clean clean-again
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: LGPL-2.1-or-later
+
+integration_tests += [
+        integration_test_template + {
+                'name' : fs.name(meson.current_source_dir()),
+                'storage' : 'persistent',
+                'vm' : true,
+                'firmware' : 'auto',
+        },
+]
@@ -0,0 +1,10 @@
+#!/usr/bin/env bash
+# SPDX-License-Identifier: LGPL-2.1-or-later
+set -e
+
+TEST_DESCRIPTION="Test Multi-Profile UKI Boots"
+
+# shellcheck source=test/test-functions
+. "${TEST_BASE_DIR:?}/test-functions"
+
+do_test "$@"
@@ -376,6 +376,7 @@ foreach dirname : [
         'TEST-83-BTRFS',
         'TEST-84-STORAGETM',
         'TEST-85-NETWORK',
+        'TEST-86-MULTI-PROFILE-UKI',
 ]
         subdir(dirname)
 endforeach
@@ -29,6 +29,9 @@ if ! systemd-detect-virt --quiet --container; then
     udevadm control --log-level debug
 fi

+esp_guid=C12A7328-F81F-11D2-BA4B-00A0C93EC93B
+xbootldr_guid=BC13C2FF-59E6-4262-A352-B275FD6F7172
+
 machine="$(uname -m)"
 if [ "${machine}" = "x86_64" ]; then
     root_guid=4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709

@@ -1432,6 +1435,82 @@ EOF
     systemd-dissect -U "$imgs/mnt"
 }

+testcase_fallback_partitions() {
+    local workdir image defs
+
+    workdir="$(mktemp --directory "/tmp/test-repart.fallback.XXXXXXXXXX")"
+    # shellcheck disable=SC2064
+    trap "rm -rf '${workdir:?}'" RETURN
+
+    image="$workdir/image.img"
+    defs="$workdir/defs"
+    mkdir "$defs"
+
+    tee "$defs/10-esp.conf" <<EOF
+[Partition]
+Type=esp
+Format=vfat
+SizeMinBytes=10M
+EOF
+
+    tee "$defs/20-xbootldr.conf" <<EOF
+[Partition]
+Type=xbootldr
+Format=vfat
+SizeMinBytes=100M
+SupplementFor=10-esp
+EOF
+
+    # Blank disk => big ESP should be created
+    systemd-repart --empty=create --size=auto --dry-run=no --definitions="$defs" "$image"
+
+    output=$(sfdisk -d "$image")
+    assert_in "${image}1 : start= 2048, size= 204800, type=${esp_guid}" "$output"
+    assert_not_in "${image}2" "$output"
+
+    # Disk with small ESP => ESP grows
+    sfdisk "$image" <<EOF
+label: gpt
+size=10M, type=${esp_guid}
+EOF
+
+    systemd-repart --dry-run=no --definitions="$defs" "$image"
+
+    output=$(sfdisk -d "$image")
+    assert_in "${image}1 : start= 2048, size= 204800, type=${esp_guid}" "$output"
+    assert_not_in "${image}2" "$output"
+
+    # Disk with small ESP that can't grow => XBOOTLDR created
+    truncate -s 150M "$image"
+    sfdisk "$image" <<EOF
+label: gpt
+size=10M, type=${esp_guid},
+size=10M, type=${root_guid},
+EOF
+
+    systemd-repart --dry-run=no --definitions="$defs" "$image"
+
+    output=$(sfdisk -d "$image")
+    assert_in "${image}1 : start= 2048, size= 20480, type=${esp_guid}" "$output"
+    assert_in "${image}3 : start= 43008, size= 264152, type=${xbootldr_guid}" "$output"
+
+    # Disk with existing XBOOTLDR partition => XBOOTLDR grows, small ESP created
+    sfdisk "$image" <<EOF
+label: gpt
+size=10M, type=${xbootldr_guid},
+EOF
+
+    systemd-repart --dry-run=no --definitions="$defs" "$image"
+
+    output=$(sfdisk -d "$image")
+    assert_in "${image}1 : start= 2048, size= 204800, type=${xbootldr_guid}" "$output"
+    assert_in "${image}2 : start= 206848, size= 100312, type=${esp_guid}" "$output"
+}
+
 OFFLINE="yes"
 run_testcases
@@ -0,0 +1,81 @@
+#!/usr/bin/env bash
+# SPDX-License-Identifier: LGPL-2.1-or-later
+set -eux
+set -o pipefail
+
+export SYSTEMD_LOG_LEVEL=debug
+
+bootctl
+
+CURRENT_UKI=$(bootctl --print-stub-path)
+
+echo "CURRENT UKI ($CURRENT_UKI):"
+ukify inspect "$CURRENT_UKI"
+if test -f /run/systemd/stub/profile; then
+    echo "CURRENT PROFILE:"
+    cat /run/systemd/stub/profile
+fi
+echo "CURRENT MEASUREMENT:"
+/usr/lib/systemd/systemd-measure --current
+if test -f /run/systemd/tpm2-pcr-signature.json ; then
+    echo "CURRENT SIGNATURE:"
+    jq < /run/systemd/tpm2-pcr-signature.json
+fi
+
+echo "CURRENT EVENT LOG + PCRS:"
+/usr/lib/systemd/systemd-pcrlock
+
+if test ! -f /run/systemd/stub/profile; then
+    openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out /root/pcrsign.private.pem
+    openssl rsa -pubout -in /root/pcrsign.private.pem -out /root/pcrsign.public.pem
+
+    ukify build --extend="$CURRENT_UKI" --output=/tmp/extended0.efi --profile='ID=profile0
+TITLE="Profile Zero"' --measure-base="$CURRENT_UKI" --pcr-private-key=/root/pcrsign.private.pem --pcr-public-key=/root/pcrsign.public.pem --pcr-banks=sha256,sha384,sha512
+
+    ukify build --extend=/tmp/extended0.efi --output=/tmp/extended1.efi --profile='ID=profile1
+TITLE="Profile One"' --measure-base=/tmp/extended0.efi --cmdline="testprofile1=1 $(cat /proc/cmdline)" --pcr-private-key=/root/pcrsign.private.pem --pcr-public-key=/root/pcrsign.public.pem --pcr-banks=sha256,sha384,sha512
+
+    ukify build --extend=/tmp/extended1.efi --output=/tmp/extended2.efi --profile='ID=profile2
+TITLE="Profile Two"' --measure-base=/tmp/extended1.efi --cmdline="testprofile2=1 $(cat /proc/cmdline)" --pcr-private-key=/root/pcrsign.private.pem --pcr-public-key=/root/pcrsign.public.pem --pcr-banks=sha256,sha384,sha512
+
+    echo "EXTENDED UKI:"
+    ukify inspect /tmp/extended2.efi
+    rm /tmp/extended0.efi /tmp/extended1.efi
+    mv /tmp/extended2.efi "$CURRENT_UKI"
+
+    # Prepare a disk image, locked to the PCR measurements of the UKI we just generated
+    truncate -s 32M /root/encrypted.raw
+    echo -n "geheim" > /root/encrypted.secret
+    cryptsetup luksFormat -q --pbkdf pbkdf2 --pbkdf-force-iterations 1000 --use-urandom /root/encrypted.raw --key-file=/root/encrypted.secret
+    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs= --tpm2-public-key=/root/pcrsign.public.pem --unlock-key-file=/root/encrypted.secret /root/encrypted.raw
+    rm -f /root/encrypted.secret
+
+    reboot
+    exit 0
+else
+    # shellcheck source=/dev/null
+    . /run/systemd/stub/profile
+
+    # Validate that with the current profile we can fulfill the PCR 11 policy
+    systemd-cryptsetup attach multiprof /root/encrypted.raw - tpm2-device=auto,headless=1
+    systemd-cryptsetup detach multiprof
+
+    if [ "$ID" = "profile0" ]; then
+        grep -v testprofile /proc/cmdline
+        echo "default $(basename "$CURRENT_UKI")@profile1" > "$(bootctl -p)/loader/loader.conf"
+        reboot
+        exit 0
+    elif [ "$ID" = "profile1" ]; then
+        grep testprofile1=1 /proc/cmdline
+        echo "default $(basename "$CURRENT_UKI")@profile2" > "$(bootctl -p)/loader/loader.conf"
+        reboot
+        exit 0
+    elif [ "$ID" = "profile2" ]; then
+        grep testprofile2=1 /proc/cmdline
+        rm /root/encrypted.raw
+    else
+        exit 1
+    fi
+fi
+
+touch /testok
@@ -19,5 +19,5 @@ Q /var/lib/machines 0700 - - -
 # systemd-nspawn --ephemeral places snapshots) we are more strict, to
 # avoid removing unrelated temporary files.

-R!$ /var/lib/machines/.#*
-R!$ /.#machine.*
+R! /var/lib/machines/.#*
+R! /.#machine.*

@@ -14,10 +14,10 @@ x /var/tmp/systemd-private-%b-*
 X /var/tmp/systemd-private-%b-*/tmp

 # Remove top-level private temporary directories on each boot
-R!$ /tmp/systemd-private-*
-R!$ /var/tmp/systemd-private-*
+R! /tmp/systemd-private-*
+R! /var/tmp/systemd-private-*

 # Handle lost systemd-coredump temp files. They could be lost on old filesystems,
 # for example, after hard reboot.
 x /var/lib/systemd/coredump/.#core*.%b*
-r!$ /var/lib/systemd/coredump/.#*
+r! /var/lib/systemd/coredump/.#*