Compare commits

...

7 Commits

Author SHA1 Message Date
Anita Zhang 0419dae715
Merge pull request #16885 from keszybz/rework-cache-timestamps
Rework cache timestamps
2020-08-31 23:21:12 -07:00
Anita Zhang 026a72c175
Merge pull request #16917 from poettering/contrib-rfe
CONTRIBUTING: be clearer about versions and RFE process
2020-08-31 17:25:36 -07:00
Lennart Poettering c4bc2e9343 CONTRIBUTING: be clearer about versions and RFE process
Fixes: #16550
2020-08-31 23:23:56 +02:00
Zbigniew Jędrzejewski-Szmek c2911d48ff Rework how we cache mtime to figure out if units changed
Instead of assuming that more-recently modified directories have higher mtime,
just look for any mtime changes, up or down. Since we don't want to remember
individual mtimes, hash them to obtain a single value.

This should help us behave properly in the case when the time jumps backwards
during boot: various files might have mtimes that are in the future, but we won't
care. This fixes the following scenario:

We have /etc/systemd/system with mtime T1. T1 is initially far in the past.
We have /run/systemd/generator with mtime T2.
The time is adjusted backwards, so T2 will always be in the future for a while.
Now the user writes new files to /etc/systemd/system, and T1 is updated to T1'.
Nevertheless, T1 < T1' << T2.
We would consider our cache to be up-to-date, falsely.
2020-08-31 20:53:38 +02:00
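
A minimal self-contained sketch of the idea in the commit above, using FNV-1a as a stand-in for the siphash24 hash the patch actually uses (the two directory paths come from the scenario in the commit message; everything else is illustrative):

#include <stdint.h>
#include <stdio.h>
#include <sys/stat.h>

/* Illustrative stand-in for the patch's siphash24-based approach: fold each directory's
 * mtime into a single 64-bit FNV-1a hash. Any mtime change, up or down, changes the hash. */
static uint64_t hash_mtimes(const char *const *dirs, size_t n) {
        uint64_t h = UINT64_C(0xcbf29ce484222325);      /* FNV offset basis */

        for (size_t i = 0; i < n; i++) {
                struct stat st;
                uint64_t mtime = 0;

                if (stat(dirs[i], &st) >= 0)
                        mtime = (uint64_t) st.st_mtime; /* missing dirs hash as 0 */

                for (size_t b = 0; b < sizeof mtime; b++) {
                        h ^= (mtime >> (8 * b)) & 0xff;
                        h *= UINT64_C(0x100000001b3);   /* FNV prime */
                }
        }

        return h;
}

int main(void) {
        const char *dirs[] = { "/etc/systemd/system", "/run/systemd/generator" };
        uint64_t old_hash = hash_mtimes(dirs, 2);

        /* Later: recompute and compare. Equality means "cache still valid", regardless of
         * whether any individual mtime moved forward or backward in the meantime. */
        uint64_t new_hash = hash_mtimes(dirs, 2);
        printf("cache %s\n", old_hash == new_hash ? "still valid" : "needs rebuild");
        return 0;
}
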
Zbigniew Jędrzejewski-Szmek 02103e5716 core: always try to reload not-found unit
This check was added in d904afc730. It would only
apply in the case where the cache hasn't been loaded yet. I think we pretty
much always have the cache loaded when we reach this point, but even if we
didn't, it seems better to try to reload the unit. So let's drop this check.
2020-08-31 20:53:38 +02:00
Zbigniew Jędrzejewski-Szmek c149d2b491 pid1: use the cache mtime not clock to "mark" load attempts
We really only care if the cache has been reloaded between the time when we
last attempted to load this unit and now. So instead of recording the actual
time we try to load the unit, just store the timestamp of the cache. This has
the advantage that we'll notice if the cache mtime jumps forward or backward.

Also rename fragment_loadtime to fragment_not_found_time. It only gets set when
we failed to load the unit, and the old name suggested it is always set.

In https://bugzilla.redhat.com/show_bug.cgi?id=1871327
(and most likely https://bugzilla.redhat.com/show_bug.cgi?id=1867930
and most likely https://bugzilla.redhat.com/show_bug.cgi?id=1872068) we try
to load a non-existent unit over and over from transaction_add_job_and_dependencies().
My understanding is that the clock was in the future during initial boot,
so cache_mtime is always in the future (since we don't touch the fs after initial boot),
so no matter how many times we try to load the unit and set
fragment_loadtime / fragment_not_found_time, the recorded value always stays below cache_mtime,
so manager_unit_cache_should_retry_load() always returns true.
2020-08-31 20:53:38 +02:00
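
A toy model of the retry check described above, with simplified stand-in structs rather than the real Manager and Unit types (the real function also checks load_state and re-hashes the lookup paths, both omitted here):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy stand-ins for the Manager/Unit fields this commit touches; not the real structures. */
typedef struct {
        uint64_t unit_cache_timestamp_hash;           /* hash over the lookup-path mtimes */
} Manager;

typedef struct {
        Manager *manager;
        uint64_t fragment_not_found_timestamp_hash;   /* cache hash seen at the last failed load */
} Unit;

/* New-style check: retry only if the cache was rebuilt (its hash changed) since the last
 * failed load. Unlike comparing now(CLOCK_REALTIME) against a directory mtime, a clock that
 * was in the future during early boot cannot make this return true forever. */
static bool cache_should_retry_load(const Unit *u) {
        return u->manager->unit_cache_timestamp_hash != u->fragment_not_found_timestamp_hash;
}

int main(void) {
        Manager m = { .unit_cache_timestamp_hash = 0x1111 };
        Unit u = { .manager = &m };

        /* A load attempt fails: remember which cache generation we looked at. */
        u.fragment_not_found_timestamp_hash = m.unit_cache_timestamp_hash;
        printf("retry while cache unchanged: %d\n", cache_should_retry_load(&u));  /* 0 */

        /* The cache is rebuilt (e.g. a generator wrote new units): the hash changes. */
        m.unit_cache_timestamp_hash = 0x2222;
        printf("retry after cache rebuild:   %d\n", cache_should_retry_load(&u));  /* 1 */
        return 0;
}
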
Zbigniew Jędrzejewski-Szmek 81be23886d core: rename manager_unit_file_maybe_loadable_from_cache()
The name is misleading, since we aren't really loading the unit from cache — if
this function returns true, we'll try to load the unit from disk, updating the
cache in the process.
2020-08-31 20:53:38 +02:00
10 changed files with 66 additions and 50 deletions

View File

@@ -10,8 +10,9 @@ We welcome contributions from everyone. However, please follow the following gui
 ## Filing Issues
-* We use [GitHub Issues](https://github.com/systemd/systemd/issues) **exclusively** for tracking **bugs** and **feature** **requests** of systemd. If you are looking for help, please contact [systemd-devel mailing list](https://lists.freedesktop.org/mailman/listinfo/systemd-devel) instead.
+* We use [GitHub Issues](https://github.com/systemd/systemd/issues) **exclusively** for tracking **bugs** and **feature** **requests** (RFEs) of systemd. If you are looking for help, please contact [systemd-devel mailing list](https://lists.freedesktop.org/mailman/listinfo/systemd-devel) instead.
 * We only track bugs in the **two** **most** **recently** **released** (non-rc) **versions** of systemd in the GitHub Issue tracker. If you are using an older version of systemd, please contact your distribution's bug tracker instead (see below). See [GitHub Release Page](https://github.com/systemd/systemd/releases) for the list of most recent releases.
+* When filing a feature request issue (RFE), please always check first if the newest upstream version of systemd already implements the feature, and whether there's already an issue filed for your feature by someone else.
 * When filing an issue, specify the **systemd** **version** you are experiencing the issue with. Also, indicate which **distribution** you are using.
 * Please include an explanation how to reproduce the issue you are pointing out.

View File

@@ -6,6 +6,8 @@
 #include <string.h>
 #include <sys/types.h>
+#include "time-util.h"
 struct siphash {
         uint64_t v0;
         uint64_t v1;
@@ -25,6 +27,10 @@ static inline void siphash24_compress_boolean(bool in, struct siphash *state) {
         siphash24_compress(&i, sizeof i, state);
 }
+static inline void siphash24_compress_usec_t(usec_t in, struct siphash *state) {
+        siphash24_compress(&in, sizeof in, state);
+}
 static inline void siphash24_compress_string(const char *in, struct siphash *state) {
         if (!in)
                 return;
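
A small usage sketch for the helper added above, assuming it is compiled inside the systemd source tree so that "siphash24.h", "time-util.h" and "sd-id128.h" resolve; the key and the two mtime values are made up:

/* Hash two usec_t timestamps into one 64-bit value via the new helper. */
#include <inttypes.h>
#include <stdio.h>

#include "sd-id128.h"
#include "siphash24.h"
#include "time-util.h"

#define EXAMPLE_KEY SD_ID128_MAKE(de,ad,be,ef,de,ad,be,ef,de,ad,be,ef,de,ad,be,ef)

int main(void) {
        struct siphash state;
        usec_t a = 1598889600 * USEC_PER_SEC;   /* two fictional directory mtimes */
        usec_t b = 1598893200 * USEC_PER_SEC;

        siphash24_init(&state, EXAMPLE_KEY.bytes);
        siphash24_compress_usec_t(a, &state);   /* feeds the raw usec_t bytes into the state */
        siphash24_compress_usec_t(b, &state);

        printf("timestamp hash: %016" PRIx64 "\n", siphash24_finalize(&state));
        return 0;
}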

View File

@@ -5268,7 +5268,7 @@ int unit_load_fragment(Unit *u) {
         /* Possibly rebuild the fragment map to catch new units */
         r = unit_file_build_name_map(&u->manager->lookup_paths,
-                                     &u->manager->unit_cache_mtime,
+                                     &u->manager->unit_cache_timestamp_hash,
                                      &u->manager->unit_id_map,
                                      &u->manager->unit_name_map,
                                      &u->manager->unit_path_cache);

View File

@@ -704,7 +704,7 @@ static void manager_free_unit_name_maps(Manager *m) {
         m->unit_id_map = hashmap_free(m->unit_id_map);
         m->unit_name_map = hashmap_free(m->unit_name_map);
         m->unit_path_cache = set_free(m->unit_path_cache);
-        m->unit_cache_mtime = 0;
+        m->unit_cache_timestamp_hash = 0;
 }
 static int manager_setup_run_queue(Manager *m) {
@@ -1947,19 +1947,22 @@ unsigned manager_dispatch_load_queue(Manager *m) {
         return n;
 }
-bool manager_unit_file_maybe_loadable_from_cache(Unit *u) {
+bool manager_unit_cache_should_retry_load(Unit *u) {
         assert(u);
+        /* Automatic reloading from disk only applies to units which were not found sometime in the past, and
+         * the not-found stub is kept pinned in the unit graph by dependencies. For units that were
+         * previously loaded, we don't do automatic reloading, and daemon-reload is necessary to update. */
         if (u->load_state != UNIT_NOT_FOUND)
                 return false;
-        if (u->manager->unit_cache_mtime == 0)
-                return false;
-        if (u->manager->unit_cache_mtime > u->fragment_loadtime)
+        /* The cache has been updated since the last time we tried to load the unit. There might be new
+         * fragment paths to read. */
+        if (u->manager->unit_cache_timestamp_hash != u->fragment_not_found_timestamp_hash)
                 return true;
-        return !lookup_paths_mtime_good(&u->manager->lookup_paths, u->manager->unit_cache_mtime);
+        /* The cache needs to be updated because there are modifications on disk. */
+        return !lookup_paths_timestamp_hash_same(&u->manager->lookup_paths, u->manager->unit_cache_timestamp_hash, NULL);
 }
 int manager_load_unit_prepare(
@@ -2008,10 +2011,10 @@ int manager_load_unit_prepare(
                  * first if anything in the usual paths was modified since the last time
                  * the cache was loaded. Also check if the last time an attempt to load the
                  * unit was made was before the most recent cache refresh, so that we know
-                 * we need to try again - even if the cache is current, it might have been
+                 * we need to try again — even if the cache is current, it might have been
                  * updated in a different context before we had a chance to retry loading
                  * this particular unit. */
-                if (manager_unit_file_maybe_loadable_from_cache(ret))
+                if (manager_unit_cache_should_retry_load(ret))
                         ret->load_state = UNIT_STUB;
                 else {
                         *_ret = ret;

View File

@@ -233,7 +233,7 @@ struct Manager {
         Hashmap *unit_id_map;
         Hashmap *unit_name_map;
         Set *unit_path_cache;
-        usec_t unit_cache_mtime;
+        uint64_t unit_cache_timestamp_hash;
         char **transient_environment;  /* The environment, as determined from config files, kernel cmdline and environment generators */
         char **client_environment;     /* Environment variables created by clients through the bus API */
@@ -464,7 +464,7 @@ Unit *manager_get_unit(Manager *m, const char *name);
 int manager_get_job_from_dbus_path(Manager *m, const char *s, Job **_j);
-bool manager_unit_file_maybe_loadable_from_cache(Unit *u);
+bool manager_unit_cache_should_retry_load(Unit *u);
 int manager_load_unit_prepare(Manager *m, const char *name, const char *path, sd_bus_error *e, Unit **_ret);
 int manager_load_unit(Manager *m, const char *name, const char *path, sd_bus_error *e, Unit **_ret);
 int manager_load_startable_unit_or_warn(Manager *m, const char *name, const char *path, Unit **ret);

View File

@@ -960,12 +960,13 @@ int transaction_add_job_and_dependencies(
                  * first if anything in the usual paths was modified since the last time
                  * the cache was loaded. Also check if the last time an attempt to load the
                  * unit was made was before the most recent cache refresh, so that we know
-                 * we need to try again - even if the cache is current, it might have been
+                 * we need to try again — even if the cache is current, it might have been
                  * updated in a different context before we had a chance to retry loading
                  * this particular unit.
+                 *
                  * Given building up the transaction is a synchronous operation, attempt
                  * to load the unit immediately. */
-                if (r < 0 && manager_unit_file_maybe_loadable_from_cache(unit)) {
+                if (r < 0 && manager_unit_cache_should_retry_load(unit)) {
                         sd_bus_error_free(e);
                         unit->load_state = UNIT_STUB;
                         r = unit_load(unit);

View File

@@ -1671,18 +1671,18 @@ int unit_load(Unit *u) {
         return 0;
 fail:
-        /* We convert ENOEXEC errors to the UNIT_BAD_SETTING load state here. Configuration parsing code should hence
-         * return ENOEXEC to ensure units are placed in this state after loading */
+        /* We convert ENOEXEC errors to the UNIT_BAD_SETTING load state here. Configuration parsing code
+         * should hence return ENOEXEC to ensure units are placed in this state after loading. */
         u->load_state = u->load_state == UNIT_STUB ? UNIT_NOT_FOUND :
                         r == -ENOEXEC ? UNIT_BAD_SETTING :
                                         UNIT_ERROR;
         u->load_error = r;
-        /* Record the last time we tried to load the unit, so that if the cache gets updated between now
-         * and the next time an attempt is made to load this unit, we know we need to check again */
+        /* Record the timestamp on the cache, so that if the cache gets updated between now and the next time
+         * an attempt is made to load this unit, we know we need to check again. */
         if (u->load_state == UNIT_NOT_FOUND)
-                u->fragment_loadtime = now(CLOCK_REALTIME);
+                u->fragment_not_found_timestamp_hash = u->manager->unit_cache_timestamp_hash;
         unit_add_to_dbus_queue(u);
         unit_add_to_gc_queue(u);

View File

@@ -136,7 +136,7 @@ typedef struct Unit {
         char *source_path; /* if converted, the source file */
         char **dropin_paths;
-        usec_t fragment_loadtime;
+        usec_t fragment_not_found_timestamp_hash;
         usec_t fragment_mtime;
         usec_t source_mtime;
         usec_t dropin_mtime;

View File

@@ -1,5 +1,7 @@
 /* SPDX-License-Identifier: LGPL-2.1+ */
+#include "sd-id128.h"
 #include "dirent-util.h"
 #include "fd-util.h"
 #include "fs-util.h"
@@ -199,9 +201,14 @@ static bool lookup_paths_mtime_exclude(const LookupPaths *lp, const char *path)
                streq_ptr(path, lp->runtime_control);
 }
-bool lookup_paths_mtime_good(const LookupPaths *lp, usec_t mtime) {
-        char **dir;
+#define HASH_KEY SD_ID128_MAKE(4e,86,1b,e3,39,b3,40,46,98,5d,b8,11,34,8f,c3,c1)
+
+bool lookup_paths_timestamp_hash_same(const LookupPaths *lp, uint64_t timestamp_hash, uint64_t *ret_new) {
+        struct siphash state;
+        siphash24_init(&state, HASH_KEY.bytes);
+        char **dir;
         STRV_FOREACH(dir, (char**) lp->search_path) {
                 struct stat st;
@@ -217,18 +224,20 @@ bool lookup_paths_mtime_good(const LookupPaths *lp, usec_t mtime) {
                         continue;
                 }
-                if (timespec_load(&st.st_mtim) > mtime) {
-                        log_debug_errno(errno, "Unit dir %s has changed, need to update cache.", *dir);
-                        return false;
-                }
+                siphash24_compress_usec_t(timespec_load(&st.st_mtim), &state);
         }
-        return true;
+        uint64_t updated = siphash24_finalize(&state);
+        if (ret_new)
+                *ret_new = updated;
+        if (updated != timestamp_hash)
+                log_debug("Modification times have changed, need to update cache.");
+        return updated == timestamp_hash;
 }
 int unit_file_build_name_map(
                 const LookupPaths *lp,
-                usec_t *cache_mtime,
+                uint64_t *cache_timestamp_hash,
                 Hashmap **unit_ids_map,
                 Hashmap **unit_names_map,
                 Set **path_cache) {
@@ -245,15 +254,19 @@ int unit_file_build_name_map(
         _cleanup_hashmap_free_ Hashmap *ids = NULL, *names = NULL;
         _cleanup_set_free_free_ Set *paths = NULL;
+        uint64_t timestamp_hash;
         char **dir;
         int r;
-        usec_t mtime = 0;
-        /* Before doing anything, check if the mtime that was passed is still valid. If
-         * yes, do nothing. If *cache_time == 0, always build the cache. */
-        if (cache_mtime && *cache_mtime > 0 && lookup_paths_mtime_good(lp, *cache_mtime))
+        /* Before doing anything, check if the timestamp hash that was passed is still valid.
+         * If yes, do nothing. */
+        if (cache_timestamp_hash &&
+            lookup_paths_timestamp_hash_same(lp, *cache_timestamp_hash, &timestamp_hash))
                 return 0;
+        /* The timestamp hash is now set based on the mtimes from before when we start reading files.
+         * If anything is modified concurrently, we'll consider the cache outdated. */
         if (path_cache) {
                 paths = set_new(&path_hash_ops_free);
                 if (!paths)
@@ -263,7 +276,6 @@ int unit_file_build_name_map(
         STRV_FOREACH(dir, (char**) lp->search_path) {
                 struct dirent *de;
                 _cleanup_closedir_ DIR *d = NULL;
-                struct stat st;
                 d = opendir(*dir);
                 if (!d) {
@@ -272,13 +284,6 @@ int unit_file_build_name_map(
                         continue;
                 }
-                /* Determine the latest lookup path modification time */
-                if (fstat(dirfd(d), &st) < 0)
-                        return log_error_errno(errno, "Failed to fstat %s: %m", *dir);
-                if (!lookup_paths_mtime_exclude(lp, *dir))
-                        mtime = MAX(mtime, timespec_load(&st.st_mtim));
                 FOREACH_DIRENT_ALL(de, d, log_warning_errno(errno, "Failed to read \"%s\", ignoring: %m", *dir)) {
                         char *filename;
                         _cleanup_free_ char *_filename_free = NULL, *simplified = NULL;
@@ -417,8 +422,8 @@ int unit_file_build_name_map(
                                   basename(dst), src);
         }
-        if (cache_mtime)
-                *cache_mtime = mtime;
+        if (cache_timestamp_hash)
+                *cache_timestamp_hash = timestamp_hash;
         hashmap_free_and_replace(*unit_ids_map, ids);
         hashmap_free_and_replace(*unit_names_map, names);

View File

@@ -43,19 +43,19 @@ bool unit_type_may_template(UnitType type) _const_;
 int unit_symlink_name_compatible(const char *symlink, const char *target, bool instance_propagation);
 int unit_validate_alias_symlink_and_warn(const char *filename, const char *target);
-bool lookup_paths_mtime_good(const LookupPaths *lp, usec_t mtime);
+bool lookup_paths_timestamp_hash_same(const LookupPaths *lp, uint64_t timestamp_hash, uint64_t *ret_new);
 int unit_file_build_name_map(
                 const LookupPaths *lp,
-                usec_t *ret_time,
-                Hashmap **ret_unit_ids_map,
-                Hashmap **ret_unit_names_map,
-                Set **ret_path_cache);
+                uint64_t *cache_timestamp_hash,
+                Hashmap **unit_ids_map,
+                Hashmap **unit_names_map,
+                Set **path_cache);
 int unit_file_find_fragment(
                 Hashmap *unit_ids_map,
                 Hashmap *unit_name_map,
                 const char *unit_name,
                 const char **ret_fragment_path,
-                Set **names);
+                Set **ret_names);
 const char* runlevel_to_target(const char *rl);