Compare commits

...

89 Commits

Author SHA1 Message Date
Alex Feyerke 0ca09e7f34
Merge f41f8eb4a2 into a3c2a9ee5d 2024-09-19 15:28:20 +01:00
Yu Watanabe a3c2a9ee5d
Merge pull request #34486 from DaanDeMeyer/test-process-util
test-process-util: Migrate to new assertion macros
2024-09-19 23:28:15 +09:00
Daan De Meyer 062332f3db
Merge pull request #34481 from yuwata/has-tpm2
tpm2-util: several cleanups for tpm2_support()
2024-09-19 16:22:24 +02:00
Daan De Meyer bc9a9177b2
Merge pull request #34483 from yuwata/network-conf-parser-neighbor-nexthop
network: several cleanups for conf parsers
2024-09-19 13:59:56 +02:00
Daan De Meyer e5c6dcac87 test-process-util: Ignore EINVAL from setresuid() and setresgid()
If we're running in a user namespace with a single user and without
the nobody user, we'll get EINVAL from these system calls, so make
sure we handle those gracefully.
2024-09-19 13:42:05 +02:00
Daan De Meyer 34a7ca6db2 test-process-util: Use FORK_REOPEN_LOG everywhere we close all fds
To make sure logging works in the child processes.
2024-09-19 13:42:05 +02:00
Daan De Meyer 397820961d test-process-util: Migrate to new assertion macros 2024-09-19 13:42:03 +02:00
Yu Watanabe 3b16e9f419 man/systemd-analyze: mention required libraries for TPM2 support
Closes #34477.
2024-09-19 19:21:08 +09:00
Yu Watanabe d5a7f3b7d4 tpm2-util: colorize output of 'systemd-analyze has-tpm2' 2024-09-19 19:14:19 +09:00
Yu Watanabe f1c16ca6d6 shell-completion/analyze: add has-tpm2 2024-09-19 19:08:49 +09:00
Yu Watanabe b094398b0f tpm2-util: update comment
The has-tpm2 command has been moved to systemd-analyze.

Follow-up for 58e359604f.
2024-09-19 19:08:10 +09:00
Yu Watanabe 1ee6570843 tpm2-util: do not load tpm2 libraries when not interested in the existence of the libraries
For example, 'bootctl status' is only interested in whether the EFI
firmware has TPM2 support and a TPM2 driver is loaded. Hence, it is
not necessary to load libtss2.
2024-09-19 19:06:46 +09:00
Yu Watanabe b7f051c91d tpm2-util: introduce tpm2_is_fully_supported() 2024-09-19 19:04:15 +09:00
Yu Watanabe a13ead6814
Merge pull request #34479 from yuwata/sd-json-dispatch-field-table-static
tree-wide: make sd_json_dispatch_field table static
2024-09-19 18:59:17 +09:00
Yu Watanabe f901a7b39f network/nexthop: introduce generic conf parser for [NextHop] section 2024-09-19 18:41:47 +09:00
Yu Watanabe 9b01cf0406 network/nexthop: make conf parsers for Family= and Gateway= independent of each other 2024-09-19 18:41:46 +09:00
Yu Watanabe d5aae0713d network/nexthop: use log_section_warning() and friend 2024-09-19 18:40:38 +09:00
Daan De Meyer 1d8a81eb4e Add ASSERT_OK_ZERO_ERRNO() and ASSERT_OK_EQ_ERRNO() 2024-09-19 11:38:47 +02:00
Daan De Meyer 86c1317270
Merge pull request #34474 from DaanDeMeyer/user-group
Two integration test fixes
2024-09-19 09:20:03 +02:00
Daan De Meyer f4faac2073 test: Run TEST-74-AUX-UTILS in virtual machine
Various tests skip themselves when running in a container, so make
sure the test runs in a virtual machine so that we get full coverage.
2024-09-19 14:56:34 +09:00
Yu Watanabe 2bcc2a89f3 test: create .netdev file at last
Previously, when the test ran on mkosi, networkd was not masked and
might already be started. In that case, the interface test2 would be created
soon after the .netdev file was created, and the .link file would not be
applied to the interface. Hence, the later test case for
'networkctl cat @test2:link' would fail.

This makes networkd always start at the beginning of the test, and creates the
.netdev file after the .link file, so the .link file is always applied to the
interface created by the .netdev file.
2024-09-19 14:50:10 +09:00
Yu Watanabe 07e6a111c0 man: fix typo
Follow-up for 8aee931e7a.
2024-09-19 09:18:47 +09:00
Daan De Meyer 1d5b4317cd ci: Don't add testuser to wheel and systemd-journal groups
This breaks TEST-74-AUX-UTILS when run in a VM as the user gets access
to journal files that the test expects it can't access.
2024-09-19 08:47:53 +09:00
Ayham Kteash f41f8eb4a2
man: add conversion for tables 2024-09-18 22:37:45 +02:00
Yu Watanabe 8d6eedd8a3 network/neighbor: use log_section_warning_errno() 2024-09-19 04:03:11 +09:00
Yu Watanabe 91eaa90b81 network/neighbor: introduce generic Neighbor section parser 2024-09-19 03:59:34 +09:00
Yu Watanabe 3b5c5da73a network/neighbor: use struct in_addr_data 2024-09-19 03:58:28 +09:00
Yu Watanabe 1775654e2c conf-parser: drop unnecessary temporary variable 2024-09-19 03:39:15 +09:00
Yu Watanabe 0ea6d55a4b conf-parser: introduce config_parse_in_addr_data() 2024-09-19 03:38:22 +09:00
Yu Watanabe 26d35019de tree-wide: drop unnecessary 'struct' 2024-09-19 01:34:57 +09:00
Yu Watanabe b962338104 nsresource: make sd_json_dispatch_field table static
This also adds a missing error check for sd_json_dispatch().

Follow-up for 54452c7b2a.
2024-09-19 01:34:57 +09:00
Yu Watanabe fae0b00434 creds-util: make sd_json_dispatch_field table static 2024-09-19 01:34:57 +09:00
Yu Watanabe f7923ef318 resolve: make sd_json_dispatch_field table static 2024-09-19 01:34:57 +09:00
Yu Watanabe 36df48d863 resolvectl: make sd_json_dispatch_field table static 2024-09-19 01:34:57 +09:00
Yu Watanabe 53c638db16 updatectl: make sd_json_dispatch_field table static
This also fixes a memory leak of the Version object on failure.

Follow-up for ec15bb71c2.
2024-09-19 01:34:57 +09:00
Yu Watanabe 751a247794 varlinkctl: make sd_json_dispatch_field table static 2024-09-19 01:34:56 +09:00
Yu Watanabe 07dbbda0fc ssh-generator: make sd_json_dispatch_field table static 2024-09-19 01:34:56 +09:00
Yu Watanabe ed4a6c476e machine: make sd_json_dispatch_field table static 2024-09-19 01:34:56 +09:00
Julia Krüger f4429e9753
man: add info on includes to conversion README 2024-09-18 16:16:41 +02:00
Julia Krüger c45728e055
man: add sd_journal_get_data 2024-09-18 16:02:18 +02:00
Julia Krüger 6375d2e7d1
man: make synopsis text not a headline 2024-09-18 16:00:22 +02:00
Julia Krüger 64af8209cf
man: add funcprototype functions 2024-09-18 15:09:53 +02:00
Ayham Kteash cd66e8df6e
man : fix conversion script to produce correct errors 2024-09-18 10:42:24 +02:00
Ayham Kteash cdfdb3146a
man: add sphinx extension to generate index 2024-09-17 15:41:43 +02:00
Ayham Kteash f6b11545b4
man : move files to folders 2024-09-17 15:41:43 +02:00
Ayham Kteash 642b173126
man : update script to use includes folder 2024-09-17 15:41:42 +02:00
Julia Krüger 4f3ed99288
man: threads aware include file 2024-09-17 15:37:53 +02:00
Julia Krüger ef102f6dca
man: fix para with id 2024-09-17 15:36:54 +02:00
Julia Krüger cda0f2b826
man: include files as variable 2024-09-17 15:36:19 +02:00
Ayham Kteash 4b4bfa8da0
man : update code examples include imports 2024-09-17 14:22:06 +02:00
Ayham Kteash 35811a826a
man : fix output dir 2024-09-17 14:21:39 +02:00
Ayham Kteash 8daae1a0a3
man : fix some typos and apply patch 2024-09-17 13:12:55 +02:00
Zbigniew Jędrzejewski-Szmek fbedde59e1
man: fix title level for examples
I'm not sure if 4 is the appropriate value in all cases, but at least
for the two files that are in the test list, it works better.

Also, we need to remove newlines from the title to let it render correctly.

Signed-off-by: Ayham Kteash <ayham@thehoodiefirm.com>
2024-09-17 13:12:55 +02:00
Julia Krüger 23da42e0b8
docs: literal format as quotes 2024-09-17 12:11:05 +02:00
Ayham Kteash 35642273bf
man: update main script and readme for xml to rst conversion 2024-09-16 17:29:34 +02:00
Julia Krüger 37e6c6ce5a
man: add spdx header to generated files 2024-09-11 11:08:04 +02:00
Ayham Kteash c657e151c6
man : add shpinx extension to handle external man pages links 2024-09-09 16:14:15 +02:00
Ayham Kteash 7b2a7fe37b
man: update generated files 2024-09-04 13:05:10 +02:00
Ayham Kteash ea8ce89328
man: update directive extension to load data from conf.py 2024-09-04 13:05:09 +02:00
Ayham Kteash 370c2e9860
man: fix conversion script to render strings correctly 2024-09-04 13:05:09 +02:00
Ayham Kteash eca201a493
man: add custom definitions for vars, const, options 2024-09-04 13:05:09 +02:00
Julia Krüger 94bd1b9778
docs: add SPDX headers to static files 2024-09-02 17:10:55 +02:00
Julia Krüger 966911c4eb
docs: update heading 2024-09-02 17:04:41 +02:00
Ayham Kteash 0cacca8bb8
man: add directive list sphinx extension 2024-09-02 15:26:55 +02:00
Julia Krüger cbf9e8d5c0
chore: no empty leading or trailing lines 2024-08-28 14:50:50 +02:00
Julia Krüger 362cd65f9a
chore: version substitutions shortcut 2024-08-28 14:40:37 +02:00
Julia Krüger 0b0a3c3024
chore: remove space after redirection operator 2024-08-28 14:26:18 +02:00
Julia Krüger 85a0781899
chore: remove .bat file 2024-08-27 17:16:55 +02:00
Alex Feyerke f8e34dee5c
docs(doc-update): add info on man pages and more todos 2024-07-09 14:53:34 +02:00
Alex Feyerke 5c6c49c76e
chore(doc-migration): add/update rst files for current parser state 2024-07-09 14:43:52 +02:00
Alex Feyerke a451f7e6e2
fix(doc-migration): minor html styling improvements 2024-07-09 14:42:49 +02:00
Alex Feyerke c1bfd2fd62
fix(doc-migration): sort out indentation and header levels so they work for both html and man outputs 2024-07-09 14:42:21 +02:00
Alex Feyerke 5c4157de82
feat(doc-migration): specify some man pages to generate 2024-07-09 14:41:30 +02:00
Alex Feyerke 2c3d7bb7cd
chore: gitignore .venv 2024-07-09 14:24:56 +02:00
Alex Feyerke 6997c5805d
feat(doc-migration): only show versionadded info in html 2024-07-09 09:34:55 +02:00
Ayham Kteash b55ea0c148
man: change convert script to use new python file 2024-07-04 16:12:18 +02:00
Ayham Kteash 7280734328
man: update script to handle all files in folders and log errors to json file 2024-07-04 16:12:18 +02:00
Julia Krüger e23dedaf23
man: make meta tags a comment 2024-07-04 16:02:41 +02:00
Julia Krüger 67d40dd144
chore: increase conflict marker size for .rst 2024-07-04 11:04:36 +02:00
Julia Krüger 0704f7bd14
man: move includes into source folder 2024-07-04 11:03:21 +02:00
Julia Krüger f3b6384422
man: common-variables fix 2024-07-03 17:33:37 +02:00
Julia Krüger 036946fdd6
man: automatic inclusion-markers from ids 2024-07-03 17:32:02 +02:00
Julia Krüger b0524359ff
man: add marker to common-variables 2024-07-03 15:13:11 +02:00
Julia Krüger 4e9f71d008
man: handle xpointer includes before rest 2024-07-03 15:13:10 +02:00
Ayham Kteash 59537abf19
man: convert meta data to rst as well 2024-07-03 14:46:05 +02:00
Julia Krüger abd687e17d
doc: update included files 2024-07-03 14:45:25 +02:00
Julia Krüger 3ee3683afe
docs: add version 257 2024-07-03 14:43:12 +02:00
Ayham Kteash ee9ba6569f
man: update migration script to handle code blocks and code includes 2024-07-03 11:56:36 +02:00
Ayham Kteash 0cc5423739
man: add migration script and initial setup 2024-07-03 11:25:34 +02:00
90 changed files with 9805 additions and 799 deletions

1
.gitattributes vendored

@ -2,6 +2,7 @@
*.gpg binary generated
*.bmp binary
*.base64 generated
*.rst conflict-marker-size=100
# Mark files as "generated", i.e. no license applies to them.
# This includes output from programs, directive lists generated by grepping

3
.gitignore vendored

@ -39,3 +39,6 @@ mkosi.local.conf
.dir-locals-2.el
.vscode/
/pkg/
/doc-migration/.venv
/doc-migration/build
.venv

20
doc-migration/Makefile Normal file

@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source
BUILDDIR = build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

172
doc-migration/README.md Normal file

@ -0,0 +1,172 @@
# Migration of Documentation from Docbook to Sphinx
- [Migration of Documentation from Docbook to Sphinx](#migration-of-documentation-from-docbook-to-sphinx)
- [Prerequisites](#prerequisites)
- [Transformation Process](#transformation-process)
- [1. Docbook to `rst`](#1-docbook-to-rst)
- [2. `rst` to Sphinx](#2-rst-to-sphinx)
- [Sphinx Extensions](#sphinx-extensions)
- [sphinxcontrib-globalsubs](#sphinxcontrib-globalsubs)
- [Custom Sphinx Extensions](#custom-sphinx-extensions)
- [directive_roles.py (90% done)](#directive_rolespy-90-done)
- [external_man_links.py](#external_man_linkspy)
- [Includes](#includes)
- [Todo:](#todo)
## Prerequisites
Python dependencies for parsing docbook files and generating `rst`:
- `lxml`
Python dependencies for generating `html` and `man` pages from `rst`:
- `sphinx`
- `sphinxcontrib-globalsubs`
- `furo` (The Sphinx theme)
To install these (see [Sphinx Docs](https://www.sphinx-doc.org/en/master/tutorial/getting-started.html#setting-up-your-project-and-development-environment)):
```sh
# Generate a Python env:
$ python3 -m venv .venv
$ source .venv/bin/activate
# Install deps
$ python3 -m pip install -U lxml
$ python3 -m pip install -U sphinx
$ python3 -m pip install -U sphinxcontrib-globalsubs
$ python3 -m pip install -U furo
$ cd doc-migration && ./convert.sh
```
## Transformation Process
You can run the entire process with `./convert.sh` in the `doc-migration` folder. The individual steps are:
### 1. Docbook to `rst`
Use the `main.py` script to convert a single Docbook file to `rst`:
```sh
# in the `doc-migration` folder:
$ python3 main.py --file ../man/busctl.xml --output 'in-progress'
```
This script calls `db2rst.py`, which parses the DocBook elements in each file, applies string transformations to their contents, and glues them back together. It also outputs info on unhandled elements, so we know whether our converter is feature-complete and can achieve parity with the old docs.
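The core of `db2rst.py` is a per-tag dispatch: each DocBook element name maps to a module-level handler function with the same name. A minimal sketch of that pattern (not the real converter; the handlers below are simplified stand-ins):
```python
# Minimal sketch of db2rst.py's dispatch pattern: each DocBook tag name is
# looked up as a module-level function; unknown tags fall back to plain text.
import lxml.etree as ET

def _concat(el):
    # concatenate element text, converted children, and their tails
    s = el.text or ""
    for child in el:
        s += convert(child) + (child.tail or "")
    return s

def para(el):        # handler for <para>
    return "\n\n" + _concat(el).strip()

def emphasis(el):    # handler for <emphasis>
    return "*%s*" % _concat(el).strip()

def convert(el):
    handler = globals().get(el.tag)
    return handler(el) if handler else _concat(el)

if __name__ == "__main__":
    doc = ET.fromstring("<para>Hello <emphasis>world</emphasis>!</para>")
    print(convert(doc).strip())   # -> Hello *world*!
```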
To run the script against all files, you can use:
```sh
# in the `doc-migration` folder:
$ python3 main.py --dir ../man --output 'in-progress'
```
> When using the script to convert all files at once in our man folder, we recommend using the "in-progress" folder name as the output dir so we don't end up replacing files that were already converted and marked as finished inside the source folder.
After using the above script at least once, you will get two files (`errors.json`, `successes_with_unhandled_tags.json`) in the output dir.
`errors.json` will contain all the files that failed to convert to rst, with the respective error message for each file.
Running `python3 main.py --errored` will only process the files that had an error and are present in `errors.json`.
`successes_with_unhandled_tags.json` will contain all the files that were converted but still contained tags that are not yet defined in `db2rst.py`.
Running `python3 main.py --unhandled-only` will only process the files that are present in `successes_with_unhandled_tags.json`.
This avoids running all files at once when we only need to work on files that are not completely processed.
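For reference, both JSON files are flat lists of per-file records; this sketch shows the shape `main.py` writes (the concrete file name and error message below are made up):
```python
# Illustrative only: the shape of an entry in errors.json /
# successes_with_unhandled_tags.json as written by main.py.
import json

entries = [
    {
        "file": "systemd.xml",            # example file name
        "status": "error",
        "unhandled_tags": [],
        "error": "example parse error",   # placeholder message
    },
]

# --errored / --unhandled-only simply reload the file names from such a list:
files_to_process = [entry["file"] for entry in entries]
print(json.dumps(files_to_process))
```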
### 2. `rst` to Sphinx
```sh
# in the `/doc-migration` folder
$ rm -rf build
# ☝️ if you already have a build
$ make html man
```
- The `html` files end up in `/doc-migration/build/html`. Open the `index.html` there to browse the docs.
- The `man` files end up in `/doc-migration/build/man`. Preview an individual file with `$ mandoc -l build/man/busctl.1`
#### Sphinx Extensions
We use the following Sphinx extensions to achieve parity with the old docs:
##### sphinxcontrib-globalsubs
Allows referencing variables in the `global_substitutions` object in `/doc-migration/source/conf.py` (the Sphinx config file).
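A hedged sketch of what such a config entry can look like; the actual keys and values live in the real `conf.py`, so the ones below are assumptions:
```python
# Assumed shape of the substitution table in doc-migration/source/conf.py.
# sphinxcontrib-globalsubs exposes each key as a |substitution| in rst, and
# db2rst.py also consults it to resolve version-info xpointers such as "v205".
extensions = [
    "sphinxcontrib.globalsubs",
    # ... plus the custom extensions under source/_ext/
]

global_substitutions = {
    "v205": "205",   # assumed mapping: xpointer id -> version string
    "v257": "257",
}
```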
#### Custom Sphinx Extensions
##### directive_roles.py (90% done)
This is used to add custom Sphinx directives and roles to generate the systemd directive lists page.
It achieves the functionality existing in `tools/make-directive-index.py` by building the Directive Index page from custom Sphinx roles.
The pattern for these Sphinx roles is `:directive:{directive_id}:{type}`.
For example, we can use an inline Sphinx role like this:
```
:directive:environment-variables:var:`SYSEXT_SCOPE=`
```
This will then be inserted into the systemd directives page at build time, under the group `environment-variables`.
We can use the `{type}` to have more control over how the entry is treated inside the Directive Index page.
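The extension reads two config values from `conf.py`, `directives_data` and `role_types`, and registers one role per id/type pair. A sketch of what those values could look like (the real titles and descriptions are defined in the actual config; the wording below is assumed):
```python
# Assumed example of the config values consumed by directive_roles.py.
# For every id/type combination it registers a role named
# :directive:{id}:{type}: and later groups the collected targets by id.
directives_data = [
    {
        "id": "environment-variables",     # group key used in the role names
        "title": "Environment variables",  # assumed title
        "description": "Environment variables understood by systemd tools.",  # assumed text
    },
]

role_types = ["var", "option", "constant"]  # matches the render_map in the extension
```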
##### external_man_links.py
This is used to create custom Sphinx roles that handle external links to man pages, so we avoid having full URLs in the rst. For example,
`` :die-net:`refentrytitle(manvolnum)` `` will lead to `http://linux.die.net/man/{manvolnum}/{refentrytitle}`.
A full list of these roles can be found in [external_man_links](source/_ext/external_man_links.py).
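Under the hood, each role is just a URL template keyed by the role name. A small self-contained sketch of the expansion (the full table lives in `external_man_links.py`):
```python
# Sketch of the role -> URL expansion done by external_man_links.py.
extlink_formats = {
    "die-net": "http://linux.die.net/man/{manvolnum}/{refentrytitle}",
    "man-pages": "https://man7.org/linux/man-pages/man{manvolnum}/{refentrytitle}.{manvolnum}.html",
}

def expand(role: str, refentrytitle: str, manvolnum: str) -> str:
    return extlink_formats[role].format(refentrytitle=refentrytitle, manvolnum=manvolnum)

print(expand("die-net", "systemd", "1"))
# -> http://linux.die.net/man/1/systemd
```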
#### Includes
1. Versions
In the Docbook files you may find lines like these: `<xi:include href="version-info.xml" xpointer="v205"/>`, which would render into `Added in version 205` in the docs. This is now achieved with the existing [sphinx directive ".. versionadded::"](https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#directive-versionadded) and represented as `.. versionadded:: 205` in the rst file.
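A rough sketch of the string transformation the converter applies for these includes (the real logic lives in `_includes()` in `db2rst.py`; the helper name here is made up):
```python
# Hypothetical helper illustrating the version-info include conversion:
# <xi:include href="version-info.xml" xpointer="v205"/>  ->  .. versionadded:: 205
def version_include_to_rst(xpointer: str, global_substitutions: dict) -> str:
    version = global_substitutions.get(xpointer)   # e.g. "v205" -> "205"
    return f".. only:: html\n\n   .. versionadded:: {version}\n"

print(version_include_to_rst("v205", {"v205": "205"}))
```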
2. Code Snippets
These can be included with the [literalinclude directive](https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#directive-literalinclude) when they live in their own file.
Example:
```rst
.. literalinclude:: ./check-os-release-simple.py
:language: python
```
There is also the option to include a [code block](https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#directive-code-block) directly in the rst file.
Example:
```rst
.. code-block:: sh
a{sv} 3 One s Eins Two u 2 Yes b true
```
3. Text Snippets
There are a few xml files where sections are reused in multiple other files. While it is no problem to include a whole other rst file, only including a part of that file is a bit more tricky. You can choose to include a text partial that starts after a specific text and stops before reaching another text. So we decided it would be best to add start and stop markers to define these sections in the source files. These markers are `.. inclusion-marker-do-not-remove` and `.. inclusion-end-marker-do-not-remove`, so that a `<xi:include href="standard-options.xml" xpointer="no-pager" />` turns into:
```rst
.. include:: ./standard-options.rst
:start-after: .. inclusion-marker-do-not-remove no-pager
:end-before: .. inclusion-end-marker-do-not-remove no-pager
```
## Todo
An incomplete list.
- [ ] Custom Link transformations:
- [ ] `custom-man.xsl`
- [x] `custom-html.xsl`
- [ ] See whether `tools/xml_helper.py` does anything we don't do; this also contains useful code for:
- [ ] Build a man index, as in `tools/make-man-index.py`
- [x] Build a directives index, as in `tools/make-directive-index.py`
- [ ] DBUS doc generation `tools/update-dbus-docs.py`
- [ ] See whether `tools/update-man-rules.py` does anything we don't do
- [ ] Make sure the `man_pages` we generate for Sphinx's `conf.py` match the Meson rules in `man/rules/meson.build`
- [ ] Re-implement check-api-docs

26
doc-migration/convert.sh Executable file

@ -0,0 +1,26 @@
#!/bin/bash
# SPDX-License-Identifier: LGPL-2.1-or-later
# Array of XML filenames
files=("sd_journal_get_data" "busctl" "systemd" "journalctl" "os-release")
# Directory paths
input_dir="../man"
output_dir="source/docs"
echo "---------------------"
echo "Converting xml to rst"
echo ""
# Iterate over the filenames
for file in "${files[@]}"; do
echo "------------------"
python3 main.py --dir ${input_dir} --output ${output_dir} --file "${file}.xml"
done
# Clean and build
rm -rf build
echo "--------------------"
echo "Building Sphinx Docs"
echo "--------------------"
make html

830
doc-migration/db2rst.py Normal file

@ -0,0 +1,830 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# SPDX-License-Identifier: LGPL-2.1-or-later
"""
DocBook to ReST converter
=========================
This script may not work out of the box, but is easy to extend.
If you extend it, please send me a patch: wojdyr at gmail.
Docbook has >400 elements, most of them are not supported (yet).
``pydoc db2rst`` shows the list of supported elements.
In reST, inline markup can not be nested (major deficiency of reST).
Since it is not clear what to do with, say,
<subscript><emphasis>x</emphasis></subscript>
the script outputs incorrect (nested) reST (:sub:`*x*`)
and it is up to user to decide how to change it.
Usage: db2rst.py file.xml > file.rst
Ported to Python3 in 2024 by neighbourhood.ie
:copyright: 2009 by Marcin Wojdyr.
:license: BSD.
"""
# If this option is True, XML comment are discarded. Otherwise, they are
# converted to ReST comments.
# Note that ReST doesn't support inline comments. XML comments
# are converted to ReST comment blocks, what may break paragraphs.
from source import conf
import lxml.etree as ET
import re
import sys
import os
from pathlib import Path
REMOVE_COMMENTS = False
# id attributes of DocBook elements are translated to ReST labels.
# If this option is False, only labels that are used in links are generated.
WRITE_UNUSED_LABELS = False
# The Files have sections that are used as includes in other files
FILES_USED_FOR_INCLUDES = ['sd_journal_get_data.xml', 'standard-options.xml',
'user-system-options.xml', 'common-variables.xml', 'standard-conf.xml',
'libsystemd-pkgconfig.xml', 'threads-aware.xml']
# to avoid duplicate error reports
_not_handled_tags = set()
# to remember which id/labels are really needed
_linked_ids = set()
# buffer that is flushed after the end of paragraph,
# used for ReST substitutions
_buffer = ""
_indent_next_listItem_by = 0
def _run(input_file, output_dir):
sys.stderr.write("Parsing XML file `%s'...\n" % input_file)
parser = ET.XMLParser(remove_comments=REMOVE_COMMENTS, no_network=False)
tree = ET.parse(input_file, parser=parser)
for elem in tree.iter():
if elem.tag in ("xref", "link"):
_linked_ids.add(elem.get("linkend"))
output_file = os.path.join(output_dir, os.path.basename(
input_file).replace('.xml', '.rst'))
with open(output_file, 'w') as file:
file.write(TreeRoot(tree.getroot()).encode('utf-8').decode('utf-8'))
def _warn(s):
sys.stderr.write("WARNING: %s\n" % s)
def _supports_only(el, tags):
"print warning if there are unexpected children"
for i in el:
if i.tag not in tags:
_warn("%s/%s skipped." % (el.tag, i.tag))
def _what(el):
"returns string describing the element, such as <para> or Comment"
if isinstance(el.tag, str):
return "<%s>" % el.tag
elif isinstance(el, ET._Comment):
return "Comment"
else:
return str(el)
def _has_only_text(el):
"print warning if there are any children"
if list(el):
_warn("children of %s are skipped: %s" % (_get_path(el),
", ".join(_what(i) for i in el)))
def _has_no_text(el):
"print warning if there is any non-blank text"
if el.text is not None and not el.text.isspace():
_warn("skipping text of <%s>: %s" % (_get_path(el), el.text))
for i in el:
if i.tail is not None and not i.tail.isspace():
_warn("skipping tail of <%s>: %s" % (_get_path(i), i.tail))
def _includes(el):
file_path_pathlib = Path(el.get('href'))
file_extension = file_path_pathlib.suffix
include_files = FILES_USED_FOR_INCLUDES
if file_extension == '.xml':
if el.get('href') == 'version-info.xml':
versionString = conf.global_substitutions.get(
el.get("xpointer"))
# `\n\n \n\n ` forces a newline and subsequent indent.
# The empty spaces are stripped later
return f".. only:: html\n\n \n\n .. versionadded:: {versionString}\n\n "
elif not el.get("xpointer"):
return f".. include:: ../includes/{el.get('href').replace('xml', 'rst')}"
elif el.get('href') in include_files:
return f""".. include:: ../includes/{el.get('href').replace('xml', 'rst')}
:start-after: .. inclusion-marker-do-not-remove {el.get("xpointer")}
:end-before: .. inclusion-end-marker-do-not-remove {el.get("xpointer")}
"""
elif file_extension == '.c':
return f""".. literalinclude:: /code-examples/c/{el.get('href')}
:language: c
"""
elif file_extension == '.py':
return f""".. literalinclude:: /code-examples/py/{el.get('href')}
:language: python
"""
elif file_extension == '.sh':
return f""".. literalinclude:: /code-examples/sh/{el.get('href')}
:language: shell
"""
def _conv(el):
"element to string conversion; usually calls element_name() to do the job"
if el.tag in globals():
s = globals()[el.tag](el)
assert s, "Error: %s -> None\n" % _get_path(el)
return s
elif isinstance(el, ET._Comment):
return Comment(el) if (el.text and not el.text.isspace()) else ""
else:
if el.tag not in _not_handled_tags:
# Convert version references to `versionAdded` directives
if el.tag == "{http://www.w3.org/2001/XInclude}include":
return _includes(el)
else:
_warn("Don't know how to handle <%s>" % el.tag)
_warn(" ... from path: %s" % _get_path(el))
_not_handled_tags.add(el.tag)
return _concat(el)
def _no_special_markup(el):
return _concat(el)
def _remove_indent_and_escape(s, tag):
if tag == "programlisting":
return s
"remove indentation from the string s, escape some of the special chars"
s = "\n".join(i.lstrip().replace("\\", "\\\\") for i in s.splitlines())
# escape inline mark-up start-string characters (even if there is no
# end-string, docutils show warning if the start-string is not escaped)
# TODO: handle also Unicode: « ¡ ¿ as preceding chars
s = re.sub(r"([\s'\"([{</:-])" # start-string is preceded by one of these
r"([|*`[])" # the start-string
r"(\S)", # start-string is followed by non-whitespace
r"\1\\\2\3", # insert backslash
s)
return s
def _concat(el):
"concatate .text with children (_conv'ed to text) and their tails"
s = ""
id = el.get("id")
if id is not None and (WRITE_UNUSED_LABELS or id in _linked_ids):
s += "\n\n.. _%s:\n\n" % id
if el.text is not None:
s += _remove_indent_and_escape(el.text, el.tag)
for i in el:
s += _conv(i)
if i.tail is not None:
if len(s) > 0 and not s[-1].isspace() and i.tail[0] in " \t":
s += i.tail[0]
s += _remove_indent_and_escape(i.tail, el.tag)
return s.strip()
def _original_xml(el):
return ET.tostring(el, with_tail=False).decode('utf-8')
def _no_markup(el):
s = ET.tostring(el, with_tail=False).decode('utf-8')
s = re.sub(r"<.+?>", " ", s) # remove tags
s = re.sub(r"\s+", " ", s) # replace all blanks with single space
return s
def _get_level(el):
"return number of ancestors"
return sum(1 for i in el.iterancestors())
def _get_path(el):
t = [el] + list(el.iterancestors())
return "/".join(str(i.tag) for i in reversed(t))
def _make_title(t, level, indentLevel=0):
t = t.replace('\n', ' ').strip()
if level == 1:
return "\n\n" + "=" * len(t) + "\n" + t + "\n" + "=" * len(t)
char = ["#", "=", "-", "~", "^", "."]
underline = char[level-2] * len(t)
indentation = " "*indentLevel
return f"\n\n{indentation}{t}\n{indentation}{underline}"
def _join_children(el, sep):
_has_no_text(el)
return sep.join(_conv(i) for i in el)
def _block_separated_with_blank_line(el):
s = ""
id = el.get("id")
if id is not None:
s += "\n\n.. inclusion-marker-do-not-remove %s\n\n" % id
s += "\n\n" + _concat(el)
if id is not None:
s += "\n\n.. inclusion-end-marker-do-not-remove %s\n\n" % id
return s
def _indent(el, indent, first_line=None, suppress_blank_line=False):
"returns indented block with exactly one blank line at the beginning"
start = "\n\n"
if suppress_blank_line:
start = ""
# lines = [" "*indent + i for i in _concat(el).splitlines()
# if i and not i.isspace()]
# TODO: This variant above strips empty lines within elements. We dont want that to happen, at least not always
lines = [" "*indent + i for i in _concat(el).splitlines()
if i]
if first_line is not None:
# replace indentation of the first line with prefix `first_line'
lines[0] = first_line + lines[0][indent:]
return start + "\n".join(lines)
def _normalize_whitespace(s):
return " ".join(s.split())
################### DocBook elements #####################
# special "elements"
def TreeRoot(el):
output = _conv(el)
# add .. SPDX-License-Identifier: LGPL-2.1-or-later:
output = '\n\n'.join(
['.. SPDX-License-Identifier: LGPL-2.1-or-later:', output])
# remove trailing whitespace
output = re.sub(r"[ \t]+\n", "\n", output)
# leave only one blank line
output = re.sub(r"\n{3,}", "\n\n", output)
return output
def Comment(el):
return _indent(el, 12, ".. COMMENT: ")
# Meta refs
def refentry(el):
return _concat(el)
# FIXME: how to ignore/delete a tag???
def refentryinfo(el):
# ignore
return ' '
def refnamediv(el):
# return '**Name** \n\n' + _make_title(_join_children(el, ' — '), 2)
return '.. only:: html\n\n' + _make_title(_join_children(el, ''), 2, 3)
def refsynopsisdiv(el):
# return '**Synopsis** \n\n' + _make_title(_join_children(el, ' '), 3)
s = ""
s += _make_title('Synopsis', 2, 3)
s += '\n\n'
s += _join_children(el, ', ')
return s
def refname(el):
_has_only_text(el)
return "%s" % el.text
def refpurpose(el):
_has_only_text(el)
return "%s" % el.text
def cmdsynopsis(el):
return _join_children(el, ' ')
def arg(el):
text = el.text
if text is None:
text = _join_children(el, '')
# choice: req, opt, plain
choice = el.get("choice")
if choice == 'opt':
return f"[%s{'...' if el.get('rep') == 'repeat' else ''}]" % text
elif choice == 'req':
return "{%s}" % text
elif choice == 'plain':
return "%s" % text
else:
"print warning if there another choice"
_warn("skipping arg with choice of: %s" % (choice))
# general inline elements
def emphasis(el):
return "*%s*" % _concat(el).strip()
phrase = emphasis
citetitle = emphasis
acronym = _no_special_markup
def command(el):
# Only enclose in backticks if its not part of a term
# (which is already enclosed in backticks)
isInsideTerm = False
for term in el.iterancestors(tag='term'):
isInsideTerm = True
if isInsideTerm:
return _concat(el).strip()
return "``%s``" % _concat(el).strip()
def literal(el):
return "\"%s\"" % _concat(el).strip()
def varname(el):
isInsideTerm = False
for term in el.iterancestors(tag='term'):
isInsideTerm = True
if isInsideTerm:
return _concat(el).strip()
classname = ''
for varlist in el.iterancestors(tag='variablelist'):
if varlist.attrib.get('class', '') != '':
classname = varlist.attrib['class']
if len(classname) > 0:
return f":directive:{classname}:var:`%s`" % _concat(el).strip()
return "``%s``" % _concat(el).strip()
def option(el):
isInsideTerm = False
for term in el.iterancestors(tag='term'):
isInsideTerm = True
if isInsideTerm:
return _concat(el).strip()
classname = ''
for varlist in el.iterancestors(tag='variablelist'):
if varlist.attrib.get('class', '') != '':
classname = varlist.attrib['class']
if len(classname) > 0:
return f":directive:{classname}:option:`%s`" % _concat(el).strip()
return "``%s``" % _concat(el).strip()
def constant(el):
isInsideTerm = False
for term in el.iterancestors(tag='term'):
isInsideTerm = True
if isInsideTerm:
return _concat(el).strip()
classname = ''
for varlist in el.iterancestors(tag='variablelist'):
if varlist.attrib.get('class', '') != '':
classname = varlist.attrib['class']
if len(classname) > 0:
return f":directive:{classname}:constant:`%s`" % _concat(el).strip()
return "``%s``" % _concat(el).strip()
filename = command
def optional(el):
return "[%s]" % _concat(el).strip()
def replaceable(el):
return "<%s>" % _concat(el).strip()
def term(el):
if el.getparent().index(el) != 0:
return ' '
level = _get_level(el)
if level > 5:
level = 5
# Sometimes, there are multiple terms for one entry. We want those displayed in a single line, so we gather them all up and parse them together
hasMultipleTerms = False
titleStrings = [_concat(el).strip()]
title = ''
for term in el.itersiblings(tag='term'):
# We only arrive here if there is more than one `<term>` in the `el`
hasMultipleTerms = True
titleStrings.append(_concat(term).strip())
if hasMultipleTerms:
title = ', '.join(titleStrings)
# return _make_title(f"``{titleString}``", 4)
else:
title = _concat(el).strip()
if level >= 5:
global _indent_next_listItem_by
_indent_next_listItem_by += 3
return f".. option:: {title}\n\n \n\n "
return _make_title(f"``{title}``", level) + '\n\n'
# links
def ulink(el):
url = el.get("url")
text = _concat(el).strip()
if text.startswith(".. image::"):
return "%s\n :target: %s\n\n" % (text, url)
elif url == text:
return text
elif not text:
return "`<%s>`_" % (url)
else:
return "`%s <%s>`_" % (text, url)
# TODO: other elements are ignored
def xref(el):
_has_no_text(el)
id = el.get("linkend")
return ":ref:`%s`" % id if id in _linked_ids else ":ref:`%s <%s>`" % (id, id)
def link(el):
_has_no_text(el)
return "`%s`_" % el.get("linkend")
# lists
def itemizedlist(el):
return _indent(el, 2, "* ", True)
def orderedlist(el):
return _indent(el, 2, "1. ", True)
def simplelist(el):
type = el.get("type")
if type == "inline":
return _join_children(el, ', ')
else:
return _concat(el)
def member(el):
return _concat(el)
# varlists
def variablelist(el):
return _concat(el)
def varlistentry(el):
s = ""
id = el.get("id")
if id is not None:
s += "\n\n.. inclusion-marker-do-not-remove %s\n\n" % id
for i in el:
if i.tag == 'term':
s += _conv(i) + '\n\n'
else:
# Handle nested list items, this is mainly for
# options that have options
if i.tag == 'listitem':
global _indent_next_listItem_by
s += _indent(i, _indent_next_listItem_by, None, True)
_indent_next_listItem_by = 0
else:
s += _indent(i, 0, None, True)
if id is not None:
s += "\n\n.. inclusion-end-marker-do-not-remove %s\n\n" % id
return s
def listitem(el):
_supports_only(
el, ["para", "simpara", "{http://www.w3.org/2001/XInclude}include"])
return _block_separated_with_blank_line(el)
# sections
def example(el):
# FIXME: too hacky?
elements = [i for i in el]
first, rest = elements[0], elements[1:]
return _make_title(_concat(first), 4) + "\n\n" + "".join(_conv(i) for i in rest)
def sect1(el):
return _block_separated_with_blank_line(el)
def sect2(el):
return _block_separated_with_blank_line(el)
def sect3(el):
return _block_separated_with_blank_line(el)
def sect4(el):
return _block_separated_with_blank_line(el)
def section(el):
return _block_separated_with_blank_line(el)
def title(el):
return _make_title(_concat(el).strip(), _get_level(el) + 1)
# bibliographic elements
def author(el):
_has_only_text(el)
return "\n\n.. _author:\n\n**%s**" % el.text
def date(el):
_has_only_text(el)
return "\n\n.. _date:\n\n%s" % el.text
# references
def citerefentry(el):
project = el.get("project")
refentrytitle = el.xpath("refentrytitle")[0].text
manvolnum = el.xpath("manvolnum")[0].text
extlink_formats = {
'man-pages': f':man-pages:`{refentrytitle}({manvolnum})`',
'die-net': f':die-net:`{refentrytitle}({manvolnum})`',
'mankier': f':mankier:`{refentrytitle}({manvolnum})`',
'archlinux': f':archlinux:`{refentrytitle}({manvolnum})`',
'debian': f':debian:`{refentrytitle}({manvolnum})`',
'freebsd': f':freebsd:`{refentrytitle}({manvolnum})`',
'dbus': f':dbus:`{refentrytitle}({manvolnum})`',
}
if project in extlink_formats:
return extlink_formats[project]
if project == 'url':
url = el.get("url")
return f"`{refentrytitle}({manvolnum}) <{url}>`_"
return f":ref:`{refentrytitle}({manvolnum})`"
def refmeta(el):
refentrytitle = el.find('refentrytitle').text
manvolnum = el.find('manvolnum').text
meta_title = f":title: {refentrytitle}"
meta_manvolnum = f":manvolnum: {manvolnum}"
doc_title = ".. _%s:" % _join_children(
el, '') + '\n\n' + _make_title(_join_children(el, ''), 1)
return '\n\n'.join([meta_title, meta_manvolnum, doc_title])
def refentrytitle(el):
if el.get("url"):
return ulink(el)
else:
return _concat(el)
def manvolnum(el):
return "(%s)" % el.text
# media objects
def imageobject(el):
return _indent(el, 3, ".. image:: ", True)
def imagedata(el):
_has_no_text(el)
return el.get("fileref")
def videoobject(el):
return _indent(el, 3, ".. raw:: html\n\n", True)
def videodata(el):
_has_no_text(el)
src = el.get("fileref")
return ' <video src="%s" controls>\n' % src + \
' Your browser does not support the <code>video</code> element.\n' + \
' </video>'
def programlisting(el):
xi_include = el.find('.//{http://www.w3.org/2001/XInclude}include')
if xi_include is not None:
return _includes(xi_include)
else:
return f"\n\n.. code-block:: sh\n\n \n\n{_indent(el, 3)}\n\n"
def screen(el):
return _indent(el, 3, "::\n\n", False) + "\n\n"
def synopsis(el):
return _indent(el, 3, "::\n\n", False) + "\n\n"
def funcsynopsis(el):
return _concat(el)
def funcsynopsisinfo(el):
return "``%s``" % _concat(el)
def funcprototype(el):
funcdef = ''.join(el.find('.//funcdef').itertext())
params = el.findall('.//paramdef')
param_list = [''.join(param.itertext()) for param in params]
s = ".. code-block:: \n\n "
s += f"{funcdef}("
s += ",\n\t".join(param_list)
s += ");"
return s
def paramdef(el):
return el
def funcdef(el):
return el
def function(el):
return _concat(el).strip()
def parameter(el):
return el
def table(el):
title = _concat(el.find('title'))
headers = el.findall('.//thead/row/entry')
rows = el.findall('.//tbody/row')
# Collect header names
header_texts = [_concat(header) for header in headers]
# Collect row data
row_data = []
for row in rows:
entries = row.findall('entry')
row_data.append([_concat(entry) for entry in entries])
# Create the table in reST list-table format
rst_table = []
rst_table.append(f".. list-table:: {title}")
rst_table.append(" :header-rows: 1")
rst_table.append("")
# Add header row
header_line = " * - " + "\n - ".join(header_texts)
rst_table.append(header_line)
# Add rows
for row in row_data:
row_line = " * - " + "\n - ".join(row)
rst_table.append(row_line)
return '\n'.join(rst_table)
def userinput(el):
return _indent(el, 3, "\n\n")
def computeroutput(el):
return _indent(el, 3, "\n\n")
# miscellaneous
def keycombo(el):
return _join_children(el, ' + ')
def keycap(el):
return ":kbd:`%s`" % el.text
def warning(el):
return ".. warning::`%s`" % el.text
def para(el):
return _block_separated_with_blank_line(el) + '\n\n \n\n'
def simpara(el):
return _block_separated_with_blank_line(el)
def important(el):
return _indent(el, 3, ".. note:: ", True)
def itemizedlist(el):
return _indent(el, 2, "* ", True)
def orderedlist(el):
return _indent(el, 2, "1. ", True)
def refsect1(el):
return _block_separated_with_blank_line(el)
def refsect2(el):
return _block_separated_with_blank_line(el)
def refsect3(el):
return _block_separated_with_blank_line(el)
def refsect4(el):
return _block_separated_with_blank_line(el)
def refsect5(el):
return _block_separated_with_blank_line(el)
def convert_xml_to_rst(xml_file_path, output_dir):
try:
_run(xml_file_path, output_dir)
return list(_not_handled_tags), ''
except Exception as e:
_warn('Failed to convert file %s' % xml_file_path)
return [], str(e)

179
doc-migration/main.py Normal file

@ -0,0 +1,179 @@
# SPDX-License-Identifier: LGPL-2.1-or-later
import os
import json
import argparse
from typing import List
from db2rst import convert_xml_to_rst
FILES_USED_FOR_INCLUDES = [
'sd_journal_get_data.xml', 'standard-options.xml', 'user-system-options.xml',
'common-variables.xml', 'standard-conf.xml', 'libsystemd-pkgconfig.xml', 'threads-aware.xml'
]
INCLUDES_DIR = "includes"
def load_files_from_json(json_path: str) -> List[str]:
"""
Loads a list of filenames from a JSON file.
Parameters:
json_path (str): Path to the JSON file.
Returns:
List[str]: List of filenames.
"""
if not os.path.isfile(json_path):
print(f"Error: The file '{json_path}' does not exist.")
return []
with open(json_path, 'r') as json_file:
data = json.load(json_file)
return [entry['file'] for entry in data]
def update_json_file(json_path: str, updated_entries: List[dict]) -> None:
"""
Updates a JSON file with new entries.
Parameters:
json_path (str): Path to the JSON file.
updated_entries (List[dict]): List of updated entries to write to the JSON file.
"""
with open(json_path, 'w') as json_file:
json.dump(updated_entries, json_file, indent=4)
def process_xml_files_in_directory(dir: str, output_dir: str, specific_file: str = None, errored: bool = False, unhandled_only: bool = False) -> None:
"""
Processes all XML files in a specified directory, logs results to a JSON file.
Parameters:
dir (str): Path to the directory containing XML files.
output_dir (str): Path to the JSON file for logging results.
specific_file (str, optional): Specific XML file to process. Defaults to None.
errored (bool, optional): Flag to process only files listed in errors.json. Defaults to False.
unhandled_only (bool, optional): Flag to process only files listed in successes_with_unhandled_tags.json. Defaults to False.
"""
files_output_dir = os.path.join(output_dir, "files")
includes_output_dir = os.path.join(output_dir, INCLUDES_DIR)
os.makedirs(files_output_dir, exist_ok=True)
os.makedirs(includes_output_dir, exist_ok=True)
files_to_process = []
if errored:
errors_json_path = os.path.join(output_dir, "errors.json")
files_to_process = load_files_from_json(errors_json_path)
if not files_to_process:
print("No files to process from errors.json. Exiting.")
return
elif unhandled_only:
unhandled_json_path = os.path.join(
output_dir, "successes_with_unhandled_tags.json")
files_to_process = load_files_from_json(unhandled_json_path)
if not files_to_process:
print("No files to process from successes_with_unhandled_tags.json. Exiting.")
return
elif specific_file:
specific_file_path = os.path.join(dir, specific_file)
if os.path.isfile(specific_file_path):
files_to_process = [specific_file]
else:
print(f"Error: The file '{
specific_file}' does not exist in the directory '{dir}'.")
return
else:
files_to_process = [f for f in os.listdir(dir) if f.endswith(".xml")]
errors_json_path = os.path.join(output_dir, "errors.json")
unhandled_json_path = os.path.join(
output_dir, "successes_with_unhandled_tags.json")
existing_errors = []
existing_unhandled = []
if os.path.exists(errors_json_path):
with open(errors_json_path, 'r') as json_file:
existing_errors = json.load(json_file)
if os.path.exists(unhandled_json_path):
with open(unhandled_json_path, 'r') as json_file:
existing_unhandled = json.load(json_file)
updated_errors = []
updated_successes_with_unhandled_tags = []
for filename in files_to_process:
filepath = os.path.join(dir, filename)
output_subdir = includes_output_dir if filename in FILES_USED_FOR_INCLUDES else files_output_dir
print('converting file: ', filename)
try:
unhandled_tags, error = convert_xml_to_rst(filepath, output_subdir)
if error:
result = {
"file": filename,
"status": "error",
"unhandled_tags": unhandled_tags,
"error": error
}
updated_errors.append(result)
else:
result = {
"file": filename,
"status": "success",
"unhandled_tags": unhandled_tags,
"error": error
}
if len(unhandled_tags) > 0:
updated_successes_with_unhandled_tags.append(result)
existing_errors = [
entry for entry in existing_errors if entry['file'] != filename]
existing_unhandled = [
entry for entry in existing_unhandled if entry['file'] != filename]
except Exception as e:
result = {
"file": filename,
"status": "error",
"unhandled_tags": [],
"error": str(e)
}
updated_errors.append(result)
if not errored:
updated_errors += existing_errors
if not unhandled_only:
updated_successes_with_unhandled_tags += existing_unhandled
update_json_file(errors_json_path, updated_errors)
update_json_file(unhandled_json_path,
updated_successes_with_unhandled_tags)
def main():
parser = argparse.ArgumentParser(
description="Process XML files and save results to a directory.")
parser.add_argument(
"--dir", type=str, help="Path to the directory containing XML files.", default="../man")
parser.add_argument(
"--output", type=str, help="Path to the output directory for results and log files.", default="in-progress")
parser.add_argument(
"--file", type=str, help="If provided, the script will only process the specified file.", default=None)
parser.add_argument("--errored", action='store_true',
help="Process only files listed in errors.json.")
parser.add_argument("--unhandled-only", action='store_true',
help="Process only files listed in successes_with_unhandled_tags.json.")
args = parser.parse_args()
process_xml_files_in_directory(
args.dir, args.output, args.file, args.errored, args.unhandled_only)
if __name__ == "__main__":
main()


@ -0,0 +1,63 @@
import os
from sphinx.application import Sphinx
from sphinx.util.console import bold
from sphinx.util.typing import ExtensionMetadata
def generate_toctree(app: Sphinx):
root_dir = app.srcdir
index_path = os.path.join(root_dir, 'index.rst')
if not os.path.exists(index_path):
app.logger.warning(
f"{index_path} does not exist, skipping generation.")
return
with open(index_path, 'w') as index_file:
index_file.write(""".. SPDX-License-Identifier: LGPL-2.1-or-later
.. systemd documentation master file, created by
sphinx-quickstart on Wed Jun 26 16:24:13 2024.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
systemd System and Service Manager
===================================
.. manual reference to a doc by its reference label
see: https://www.sphinx-doc.org/en/master/usage/referencing.html#cross-referencing-arbitrary-locations
.. Manual links
.. ------------
.. :ref:`busctl(1)`
.. :ref:`systemd(1)`
.. OR using the toctree to pull in files
https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#directive-toctree
.. This only works if we restructure our headings to match
https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html#sections
and then only have single top-level heading with the command name
.. toctree::
:maxdepth: 1\n
""")
for subdir, _, files in os.walk(root_dir + '/docs'):
if subdir == root_dir:
continue
for file in files:
if file.endswith('.rst'):
file_path = os.path.relpath(
os.path.join(subdir, file), root_dir)
# remove the .rst extension
index_file.write(f" {file_path[:-4]}\n")
index_file.write("""
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search` """)
def setup(app: Sphinx) -> ExtensionMetadata:
app.connect('builder-inited', generate_toctree)
return {'version': '0.1', 'parallel_read_safe': True, 'parallel_write_safe': True, }

196
doc-migration/source/_ext/directive_roles.py Normal file

@ -0,0 +1,196 @@
# SPDX-License-Identifier: LGPL-2.1-or-later
from __future__ import annotations
from typing import List, Dict, Any
from docutils import nodes
from sphinx.locale import _
from sphinx.application import Sphinx
from sphinx.util.docutils import SphinxRole, SphinxDirective
from sphinx.util.typing import ExtensionMetadata
class directive_list(nodes.General, nodes.Element):
pass
class InlineDirectiveRole(SphinxRole):
def run(self) -> tuple[List[nodes.Node], List[nodes.system_message]]:
target_id = f'directive-{self.env.new_serialno("directive")}-{
self.text}'
target_node = nodes.target('', self.text, ids=[target_id])
if not hasattr(self.env, 'directives'):
self.env.directives = []
self.env.directives.append({
'name': self.name,
'text': self.text,
'docname': self.env.docname,
'lineno': self.lineno,
'target_id': target_id,
})
return [target_node], []
class ListDirectiveRoles(SphinxDirective):
def run(self) -> List[nodes.Node]:
return [directive_list('')]
def register_directive_roles(app: Sphinx) -> None:
directives_data: List[Dict[str, Any]] = app.config.directives_data
role_types: List[str] = app.config.role_types
for directive in directives_data:
dir_id: str = directive['id']
for role_type in role_types:
role_name = f'directive:{dir_id}:{role_type}'
app.add_role(role_name, InlineDirectiveRole())
def get_directive_metadata(app: Sphinx) -> Dict[str, Dict[str, Any]]:
directives_data: List[Dict[str, Any]] = app.config.directives_data
return {directive['id']: directive for directive in directives_data}
def group_directives_by_id(env) -> Dict[str, List[Dict[str, Any]]]:
grouped_directives: Dict[str, List[Dict[str, Any]]] = {}
for dir_info in getattr(env, 'directives', []):
dir_id = dir_info['name'].split(':')[1]
if dir_id not in grouped_directives:
grouped_directives[dir_id] = []
grouped_directives[dir_id].append(dir_info)
return grouped_directives
def create_reference_node(app: Sphinx, dir_info: Dict[str, Any], from_doc_name: str) -> nodes.reference:
ref_node = nodes.reference('', '')
ref_node['refdocname'] = dir_info['docname']
ref_node['refuri'] = app.builder.get_relative_uri(
from_doc_name, dir_info['docname']) + '#' + dir_info['target_id']
metadata: Dict[str, Any] = app.builder.env.metadata.get(
dir_info['docname'], {})
title: str = metadata.get('title', 'Unknown Title')
manvolnum: str = metadata.get('manvolnum', 'Unknown Volume')
ref_node.append(nodes.Text(f'{title}({manvolnum})'))
return ref_node
def render_reference_node(references: List[nodes.reference]) -> nodes.paragraph:
para = nodes.inline()
for i, ref_node in enumerate(references):
para += ref_node
if i < len(references) - 1:
para += nodes.Text(", ")
return para
def render_option(directive_text: str, references: List[nodes.reference]) -> nodes.section:
section = nodes.section()
title = nodes.title(text=directive_text, classes=['directive-header'])
title_id = nodes.make_id(directive_text)
title['ids'] = [title_id]
title['names'] = [directive_text]
section['ids'] = [title_id]
section += title
node = render_reference_node(references)
section += node
return section
def render_variable(directive_text: str, references: List[nodes.reference]) -> nodes.section:
section = nodes.section()
title = nodes.title(text=directive_text, classes=['directive-header'])
title_id = nodes.make_id(directive_text)
title['ids'] = [title_id]
title['names'] = [directive_text]
section['ids'] = [title_id]
section += title
node = render_reference_node(references)
section += node
return section
def render_constant(directive_text: str, references: List[nodes.reference]) -> nodes.section:
section = nodes.section()
title = nodes.title(text=directive_text, classes=['directive-header'])
title_id = nodes.make_id(directive_text)
title['ids'] = [title_id]
title['names'] = [directive_text]
section['ids'] = [title_id]
section += title
node = render_reference_node(references)
section += node
return section
def process_items(app: Sphinx, doctree: nodes.document, from_doc_name: str) -> None:
env = app.builder.env
directive_lookup: Dict[str, Dict[str, Any]] = get_directive_metadata(app)
grouped_directives: Dict[str, List[Dict[str, Any]]
] = group_directives_by_id(env)
render_map = {
'option': render_option,
'var': render_variable,
'constant': render_constant,
}
for node in doctree.findall(directive_list):
content: List[nodes.section] = []
for dir_id, directives in grouped_directives.items():
directive_meta = directive_lookup.get(
dir_id, {'title': 'Unknown', 'description': 'No description available.'})
section = nodes.section(ids=[dir_id])
section += nodes.title(text=directive_meta['title'])
section += nodes.paragraph(text=directive_meta['description'])
directive_references: Dict[str, List[nodes.reference]] = {}
for dir_info in directives:
directive_text: str = dir_info['text']
role_type: str = dir_info['name'].split(':')[-1]
if directive_text not in directive_references:
directive_references[directive_text] = []
ref_node = create_reference_node(app, dir_info, from_doc_name)
directive_references[directive_text].append(ref_node)
for directive_text, references in directive_references.items():
render_fn = render_map.get(role_type, render_option)
rendered_section = render_fn(directive_text, references)
section += rendered_section
content.append(section)
node.replace_self(content)
def setup(app: Sphinx) -> ExtensionMetadata:
app.add_config_value('directives_data', [], 'env')
app.add_config_value('role_types', [], 'env')
register_directive_roles(app)
app.add_directive('list_directive_roles', ListDirectiveRoles)
app.connect('doctree-resolved', process_items)
return {
'version': '0.1',
'parallel_read_safe': True,
'parallel_write_safe': True,
}

View File

@ -0,0 +1,59 @@
from typing import List, Dict, Tuple, Any
from docutils import nodes
from docutils.parsers.rst import roles, states
import re
# Define the extlink_formats dictionary with type annotations
extlink_formats: Dict[str, str] = {
'man-pages': 'https://man7.org/linux/man-pages/man{manvolnum}/{refentrytitle}.{manvolnum}.html',
'die-net': 'http://linux.die.net/man/{manvolnum}/{refentrytitle}',
'mankier': 'https://www.mankier.com/{manvolnum}/{refentrytitle}',
'archlinux': 'https://man.archlinux.org/man/{refentrytitle}.{manvolnum}.en.html',
'debian': 'https://manpages.debian.org/unstable/{refentrytitle}/{refentrytitle}.{manvolnum}.en.html',
'freebsd': 'https://www.freebsd.org/cgi/man.cgi?query={refentrytitle}&sektion={manvolnum}',
'dbus': 'https://dbus.freedesktop.org/doc/dbus-specification.html#{refentrytitle}',
}
def man_role(
name: str,
rawtext: str,
text: str,
lineno: int,
inliner: states.Inliner,
options: Dict[str, Any] = {}
) -> Tuple[List[nodes.reference], List[nodes.system_message]]:
# Regex to match text like 'locale(7)'
pattern = re.compile(r'(.+)\((\d+)\)')
match = pattern.match(text)
if not match:
msg = inliner.reporter.error(
f'Invalid man page format {text}, expected format "name(section)"',
nodes.literal_block(rawtext, rawtext),
line=lineno
)
return [inliner.problematic(rawtext, rawtext, msg)], [msg]
refentrytitle, manvolnum = match.groups()
if name not in extlink_formats:
msg = inliner.reporter.error(
f'Unknown man page role {name}',
nodes.literal_block(rawtext, rawtext),
line=lineno
)
return [inliner.problematic(rawtext, rawtext, msg)], [msg]
url = extlink_formats[name].format(
manvolnum=manvolnum, refentrytitle=refentrytitle
)
node = nodes.reference(
rawtext, f'{refentrytitle}({manvolnum})', refuri=url, **options
)
return [node], []
def setup(app: Any) -> Dict[str, bool]:
for role in extlink_formats.keys():
roles.register_local_role(role, man_role)
return {'parallel_read_safe': True, 'parallel_write_safe': True}

59
doc-migration/source/_ext/external_man_links.py Normal file

@ -0,0 +1,28 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
.sidebar-logo {
margin-inline: 0;
}
section {
margin-block-end: 2em;
}
/* Make right sidebar wider to accommodate long titles */
.toc-drawer {
width: 100%;
}
/* Make Toc section headers bold */
.toc-tree li a:has(+ ul) {
font-weight: 600;
}
.sig-name,
.sig-prename {
color: var(--color-content-foreground);
}
.std.option {
margin-left: 2rem;
}


@ -0,0 +1,7 @@
<svg xmlns="http://www.w3.org/2000/svg" width="202" height="26" viewBox="0 0 202 26" id="systemd-logo">
<!-- SPDX-License-Identifier: LGPL-2.1-or-later -->
<path d="M0 0v26h10v-4H4V4h6V0zm76 0v4h6v18h-6v4h10V0z" fill="currentColor"/>
<path d="M113.498 14.926q-4.5-.96-4.5-3.878 0-1.079.609-1.981.621-.902 1.781-1.441 1.16-.54 2.707-.54 1.63 0 2.848.528 1.219.516 1.875 1.453.656.926.656 2.121h-3.539q0-.762-.457-1.183-.457-.434-1.394-.434-.774 0-1.243.363-.457.364-.457.938 0 .55.516.89.527.34 1.781.575 1.5.28 2.543.738 1.043.445 1.653 1.242.62.797.62 2.027 0 1.114-.667 2.004-.657.88-1.887 1.383-1.219.504-2.836.504-1.711 0-2.965-.621-1.242-.633-1.898-1.617-.645-.985-.645-2.051h3.34q.036.914.656 1.36.621.433 1.594.433.902 0 1.383-.34.492-.351.492-.937 0-.364-.223-.61-.21-.258-.773-.48-.55-.223-1.57-.446zm19.384-7.606l-5.086 14.58q-.293.831-.726 1.523-.434.703-1.266 1.195-.832.504-2.098.504-.457 0-.75-.048-.281-.046-.785-.176v-2.672q.176.02.527.02.95 0 1.418-.293.47-.293.715-.961l.352-.926-4.43-12.738h3.797l2.262 7.687 2.285-7.687zm5.884 7.606q-4.5-.96-4.5-3.878 0-1.079.61-1.981.62-.902 1.781-1.441 1.16-.54 2.707-.54 1.629 0 2.848.528 1.218.516 1.875 1.453.656.926.656 2.121h-3.539q0-.762-.457-1.183-.457-.434-1.395-.434-.773 0-1.242.363-.457.364-.457.938 0 .55.516.89.527.34 1.781.575 1.5.28 2.543.738 1.043.445 1.652 1.242.621.797.621 2.027 0 1.114-.668 2.004-.656.88-1.886 1.383-1.219.504-2.836.504-1.711 0-2.965-.621-1.242-.633-1.899-1.617-.644-.985-.644-2.051h3.34q.036.914.656 1.36.621.433 1.594.433.902 0 1.383-.34.492-.351.492-.937 0-.364-.223-.61-.21-.258-.773-.48-.551-.223-1.57-.446zm13.983 2.403q.574 0 .984-.082v2.66q-.914.328-2.086.328-3.727 0-3.727-3.797V9.899h-1.793V7.321h1.793v-3.14h3.54v3.14h2.132v2.578h-2.133v6.129q0 .75.293 1.031.293.27.997.27zm14.228-2.519h-8.016q.2 1.183.985 1.886.785.691 2.015.691.914 0 1.688-.34.785-.351 1.336-1.042l1.699 1.957q-.668.96-1.957 1.617-1.278.656-3 .656-1.946 0-3.387-.82-1.43-.82-2.203-2.227-.762-1.406-.762-3.105v-.446q0-1.898.715-3.386.715-1.489 2.063-2.32 1.347-.844 3.187-.844 1.793 0 3.059.761 1.265.762 1.922 2.168.656 1.395.656 3.293zm-3.469-2.65q-.024-1.03-.574-1.628-.54-.598-1.617-.598-1.008 0-1.582.668-.563.668-.739 1.84h4.512zm19.923-5.073q1.934 0 2.989 1.148 1.054 1.148 1.054 3.727v8.039h-3.539V11.95q0-.797-.21-1.23-.212-.446-.61-.61-.387-.164-.984-.164-.715 0-1.219.352-.504.34-.797.972.02.082.02.27V20h-3.54v-8.015q0-.797-.21-1.242-.211-.445-.61-.621-.386-.176-.996-.176-.68 0-1.183.304-.492.293-.797.844V20h-3.539V7.32h3.316l.118 1.419q.633-.797 1.547-1.22.926-.433 2.086-.433 1.172 0 2.016.48.855.47 1.312 1.442.633-.926 1.582-1.418.961-.504 2.203-.504zM201.398 2v18h-3.187l-.176-1.359q-1.243 1.594-3.212 1.594-1.535 0-2.66-.82-1.113-.832-1.699-2.285-.574-1.454-.574-3.317v-.246q0-1.934.574-3.398.586-1.465 1.7-2.274 1.124-.808 2.683-.808 1.805 0 3.012 1.37V2.001zm-5.672 15.376q1.488 0 2.133-1.266v-4.898q-.61-1.266-2.11-1.266-1.207 0-1.77.984-.55.985-.55 2.637v.246q0 1.629.54 2.602.55.96 1.757.96z" fill="currentColor"/>
<path d="M45 13L63 3v20z" fill="#30d475"/>
<circle cx="30" cy="13" r="9" fill="#30d475"/>
</svg>


View File

@ -0,0 +1,43 @@
/* SPDX-License-Identifier: MIT-0 */
#define _GNU_SOURCE 1
#include <assert.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>
#include <systemd/sd-event.h>
int main(int argc, char **argv) {
pid_t pid = fork();
assert(pid >= 0);
/* SIGCHLD signal must be blocked for sd_event_add_child to work */
sigset_t ss;
sigemptyset(&ss);
sigaddset(&ss, SIGCHLD);
sigprocmask(SIG_BLOCK, &ss, NULL);
if (pid == 0) /* child */
sleep(1);
else { /* parent */
sd_event *e = NULL;
int r;
/* Create the default event loop */
sd_event_default(&e);
assert(e);
/* We create a floating child event source (attached to 'e').
* The default handler will be called with 666 as userdata, which
* will become the exit value of the loop. */
r = sd_event_add_child(e, NULL, pid, WEXITED, NULL, (void*) 666);
assert(r >= 0);
r = sd_event_loop(e);
assert(r == 666);
sd_event_unref(e);
}
return 0;
}

View File

@ -0,0 +1,48 @@
/* SPDX-License-Identifier: MIT-0 */
#include <stdlib.h>
#include <glib.h>
#include <systemd/sd-event.h>
typedef struct SDEventSource {
GSource source;
GPollFD pollfd;
sd_event *event;
} SDEventSource;
static gboolean event_prepare(GSource *source, gint *timeout_) {
return sd_event_prepare(((SDEventSource *)source)->event) > 0;
}
static gboolean event_check(GSource *source) {
return sd_event_wait(((SDEventSource *)source)->event, 0) > 0;
}
static gboolean event_dispatch(GSource *source, GSourceFunc callback, gpointer user_data) {
return sd_event_dispatch(((SDEventSource *)source)->event) > 0;
}
static void event_finalize(GSource *source) {
sd_event_unref(((SDEventSource *)source)->event);
}
static GSourceFuncs event_funcs = {
.prepare = event_prepare,
.check = event_check,
.dispatch = event_dispatch,
.finalize = event_finalize,
};
GSource *g_sd_event_create_source(sd_event *event) {
SDEventSource *source;
source = (SDEventSource *)g_source_new(&event_funcs, sizeof(SDEventSource));
source->event = sd_event_ref(event);
source->pollfd.fd = sd_event_get_fd(event);
source->pollfd.events = G_IO_IN | G_IO_HUP | G_IO_ERR;
g_source_add_poll((GSource *)source, &source->pollfd);
return (GSource *)source;
}
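
/* A possible usage sketch: attach the wrapped sd-event loop to GLib's default
 * main context and let a GMainLoop drive it. Error handling is omitted for
 * brevity. */
int run_under_glib(void) {
        sd_event *event = NULL;
        GSource *source;
        GMainLoop *loop;

        /* Acquire (or create) the default sd-event loop for this thread. */
        if (sd_event_default(&event) < 0)
                return -1;

        /* Wrap it in a GSource and attach it to the default GMainContext. */
        source = g_sd_event_create_source(event);
        g_source_attach(source, NULL);

        /* From now on, GLib dispatches the sd-event sources as well. */
        loop = g_main_loop_new(NULL, FALSE);
        g_main_loop_run(loop);

        g_main_loop_unref(loop);
        g_source_unref(source);
        sd_event_unref(event);
        return 0;
}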

View File

@ -0,0 +1,31 @@
/* SPDX-License-Identifier: MIT-0 */
#define _GNU_SOURCE 1
#include <stdio.h>
#include <stdint.h>
#include <systemd/sd-hwdb.h>
int print_usb_properties(uint16_t vid, uint16_t pid) {
char match[128];
sd_hwdb *hwdb;
const char *key, *value;
int r;
/* Match this USB vendor and product ID combination */
snprintf(match, sizeof match, "usb:v%04Xp%04X", vid, pid);
r = sd_hwdb_new(&hwdb);
if (r < 0)
return r;
SD_HWDB_FOREACH_PROPERTY(hwdb, match, key, value)
printf("%s: \"%s\"\"%s\"\n", match, key, value);
sd_hwdb_unref(hwdb);
return 0;
}
int main(int argc, char **argv) {
print_usb_properties(0x046D, 0xC534);
return 0;
}

View File

@ -0,0 +1,13 @@
/* SPDX-License-Identifier: MIT-0 */
#include <stdio.h>
#include <systemd/sd-id128.h>
#define OUR_APPLICATION_ID SD_ID128_MAKE(c2,73,27,73,23,db,45,4e,a6,3b,b9,6e,79,b5,3e,97)
int main(int argc, char *argv[]) {
sd_id128_t id;
sd_id128_get_machine_app_specific(OUR_APPLICATION_ID, &id);
printf("Our application ID: " SD_ID128_FORMAT_STR "\n", SD_ID128_FORMAT_VAL(id));
return 0;
}

View File

@ -0,0 +1,58 @@
/* SPDX-License-Identifier: MIT-0 */
#include <stdio.h>
#include <string.h>
#include <sys/inotify.h>
#include <systemd/sd-event.h>
#define _cleanup_(f) __attribute__((cleanup(f)))
static int inotify_handler(sd_event_source *source,
const struct inotify_event *event,
void *userdata) {
const char *desc = NULL;
sd_event_source_get_description(source, &desc);
if (event->mask & IN_Q_OVERFLOW)
printf("inotify-handler <%s>: overflow\n", desc);
else if (event->mask & IN_CREATE)
printf("inotify-handler <%s>: create on %s\n", desc, event->name);
else if (event->mask & IN_DELETE)
printf("inotify-handler <%s>: delete on %s\n", desc, event->name);
else if (event->mask & IN_MOVED_TO)
printf("inotify-handler <%s>: moved-to on %s\n", desc, event->name);
/* Terminate the program if an "exit" file appears */
if ((event->mask & (IN_CREATE|IN_MOVED_TO)) &&
strcmp(event->name, "exit") == 0)
sd_event_exit(sd_event_source_get_event(source), 0);
return 1;
}
int main(int argc, char **argv) {
_cleanup_(sd_event_unrefp) sd_event *event = NULL;
_cleanup_(sd_event_source_unrefp) sd_event_source *source1 = NULL, *source2 = NULL;
const char *path1 = argc > 1 ? argv[1] : "/tmp";
const char *path2 = argc > 2 ? argv[2] : NULL;
/* Note: failure handling is omitted for brevity */
sd_event_default(&event);
sd_event_add_inotify(event, &source1, path1,
IN_CREATE | IN_DELETE | IN_MODIFY | IN_MOVED_TO,
inotify_handler, NULL);
if (path2)
sd_event_add_inotify(event, &source2, path2,
IN_CREATE | IN_DELETE | IN_MODIFY | IN_MOVED_TO,
inotify_handler, NULL);
sd_event_loop(event);
return 0;
}

View File

@ -0,0 +1,21 @@
/* SPDX-License-Identifier: MIT-0 */
#include <errno.h>
#include <stdio.h>
#include <systemd/sd-journal.h>
int main(int argc, char *argv[]) {
sd_journal *j;
const char *field;
int r;
r = sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY);
if (r < 0) {
fprintf(stderr, "Failed to open journal: %s\n", strerror(-r));
return 1;
}
SD_JOURNAL_FOREACH_FIELD(j, field)
printf("%s\n", field);
sd_journal_close(j);
return 0;
}

View File

@ -0,0 +1,30 @@
/* SPDX-License-Identifier: MIT-0 */
#include <errno.h>
#include <stdio.h>
#include <systemd/sd-journal.h>
int main(int argc, char *argv[]) {
int r;
sd_journal *j;
r = sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY);
if (r < 0) {
fprintf(stderr, "Failed to open journal: %s\n", strerror(-r));
return 1;
}
SD_JOURNAL_FOREACH(j) {
const char *d;
size_t l;
r = sd_journal_get_data(j, "MESSAGE", (const void **)&d, &l);
if (r < 0) {
fprintf(stderr, "Failed to read message field: %s\n", strerror(-r));
continue;
}
printf("%.*s\n", (int) l, d);
}
sd_journal_close(j);
return 0;
}

View File

@ -0,0 +1,28 @@
/* SPDX-License-Identifier: MIT-0 */
#define _GNU_SOURCE 1
#include <poll.h>
#include <time.h>
#include <systemd/sd-journal.h>
int wait_for_changes(sd_journal *j) {
uint64_t t;
int msec;
struct pollfd pollfd;
sd_journal_get_timeout(j, &t);
if (t == (uint64_t) -1)
msec = -1;
else {
struct timespec ts;
uint64_t n;
clock_gettime(CLOCK_MONOTONIC, &ts);
n = (uint64_t) ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
msec = t > n ? (int) ((t - n + 999) / 1000) : 0;
}
pollfd.fd = sd_journal_get_fd(j);
pollfd.events = sd_journal_get_events(j);
poll(&pollfd, 1, msec);
return sd_journal_process(j);
}
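
/* A possible usage sketch: iterate over the journal and call wait_for_changes()
 * whenever the end is reached, mirroring the sd_journal_wait()-based loop shown
 * in the other journal examples. */
int iterate_with_poll(sd_journal *j) {
        int r;

        for (;;) {
                r = sd_journal_next(j);
                if (r < 0)
                        return r;

                if (r == 0) {
                        /* End of the journal reached: poll for new entries. */
                        r = wait_for_changes(j);
                        if (r < 0)
                                return r;
                        continue;
                }

                /* Process the current entry here … */
        }
}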

View File

@ -0,0 +1,27 @@
/* SPDX-License-Identifier: MIT-0 */
#include <errno.h>
#include <stdio.h>
#include <systemd/sd-journal.h>
int main(int argc, char *argv[]) {
sd_journal *j;
const void *d;
size_t l;
int r;
r = sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY);
if (r < 0) {
fprintf(stderr, "Failed to open journal: %s\n", strerror(-r));
return 1;
}
r = sd_journal_query_unique(j, "_SYSTEMD_UNIT");
if (r < 0) {
fprintf(stderr, "Failed to query journal: %s\n", strerror(-r));
return 1;
}
SD_JOURNAL_FOREACH_UNIQUE(j, d, l)
printf("%.*s\n", (int) l, (const char*) d);
sd_journal_close(j);
return 0;
}

View File

@ -0,0 +1,44 @@
/* SPDX-License-Identifier: MIT-0 */
#include <errno.h>
#include <stdio.h>
#include <systemd/sd-journal.h>
int main(int argc, char *argv[]) {
int r;
sd_journal *j;
r = sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY);
if (r < 0) {
fprintf(stderr, "Failed to open journal: %s\n", strerror(-r));
return 1;
}
for (;;) {
const void *d;
size_t l;
r = sd_journal_next(j);
if (r < 0) {
fprintf(stderr, "Failed to iterate to next entry: %s\n", strerror(-r));
break;
}
if (r == 0) {
/* Reached the end, let's wait for changes, and try again */
r = sd_journal_wait(j, (uint64_t) -1);
if (r < 0) {
fprintf(stderr, "Failed to wait for changes: %s\n", strerror(-r));
break;
}
continue;
}
r = sd_journal_get_data(j, "MESSAGE", &d, &l);
if (r < 0) {
fprintf(stderr, "Failed to read message field: %s\n", strerror(-r));
continue;
}
printf("%.*s\n", (int) l, (const char*) d);
}
sd_journal_close(j);
return 0;
}

View File

@ -0,0 +1,31 @@
/* SPDX-License-Identifier: MIT-0 */
#define _GNU_SOURCE 1
#include <errno.h>
#include <syslog.h>
#include <stdio.h>
#include <unistd.h>
#include <systemd/sd-journal.h>
#include <systemd/sd-daemon.h>
int main(int argc, char *argv[]) {
int fd;
FILE *log;
fd = sd_journal_stream_fd("test", LOG_INFO, 1);
if (fd < 0) {
fprintf(stderr, "Failed to create stream fd: %s\n", strerror(-fd));
return 1;
}
log = fdopen(fd, "w");
if (!log) {
fprintf(stderr, "Failed to create file object: %s\n", strerror(errno));
close(fd);
return 1;
}
fprintf(log, "Hello World!\n");
fprintf(log, SD_WARNING "This is a warning!\n");
fclose(log);
return 0;
}

View File

@ -0,0 +1,251 @@
/* SPDX-License-Identifier: MIT-0 */
/* Implements the LogControl1 interface as per specification:
* https://www.freedesktop.org/software/systemd/man/org.freedesktop.LogControl1.html
*
* Compile with 'cc logcontrol-example.c $(pkg-config --libs --cflags libsystemd)'
*
* To get and set properties via busctl:
*
* $ busctl --user get-property org.freedesktop.Example \
* /org/freedesktop/LogControl1 \
* org.freedesktop.LogControl1 \
* SyslogIdentifier
* s "example"
* $ busctl --user get-property org.freedesktop.Example \
* /org/freedesktop/LogControl1 \
* org.freedesktop.LogControl1 \
* LogTarget
* s "journal"
* $ busctl --user get-property org.freedesktop.Example \
* /org/freedesktop/LogControl1 \
* org.freedesktop.LogControl1 \
* LogLevel
* s "info"
* $ busctl --user set-property org.freedesktop.Example \
* /org/freedesktop/LogControl1 \
* org.freedesktop.LogControl1 \
* LogLevel \
* "s" debug
* $ busctl --user get-property org.freedesktop.Example \
* /org/freedesktop/LogControl1 \
* org.freedesktop.LogControl1 \
* LogLevel
* s "debug"
*/
#include <errno.h>
#include <stdlib.h>
#include <stdio.h>
#include <syslog.h>
#include <systemd/sd-bus.h>
#include <systemd/sd-journal.h>
#define _cleanup_(f) __attribute__((cleanup(f)))
static int log_error(int log_level, int error, const char *str) {
sd_journal_print(log_level, "%s failed: %s", str, strerror(-error));
return error;
}
typedef enum LogTarget {
LOG_TARGET_JOURNAL,
LOG_TARGET_KMSG,
LOG_TARGET_SYSLOG,
LOG_TARGET_CONSOLE,
_LOG_TARGET_MAX,
} LogTarget;
static const char* const log_target_table[_LOG_TARGET_MAX] = {
[LOG_TARGET_JOURNAL] = "journal",
[LOG_TARGET_KMSG] = "kmsg",
[LOG_TARGET_SYSLOG] = "syslog",
[LOG_TARGET_CONSOLE] = "console",
};
static const char* const log_level_table[LOG_DEBUG + 1] = {
[LOG_EMERG] = "emerg",
[LOG_ALERT] = "alert",
[LOG_CRIT] = "crit",
[LOG_ERR] = "err",
[LOG_WARNING] = "warning",
[LOG_NOTICE] = "notice",
[LOG_INFO] = "info",
[LOG_DEBUG] = "debug",
};
typedef struct object {
const char *syslog_identifier;
LogTarget log_target;
int log_level;
} object;
static int property_get(
sd_bus *bus,
const char *path,
const char *interface,
const char *property,
sd_bus_message *reply,
void *userdata,
sd_bus_error *error) {
object *o = userdata;
if (strcmp(property, "LogLevel") == 0)
return sd_bus_message_append(reply, "s", log_level_table[o->log_level]);
if (strcmp(property, "LogTarget") == 0)
return sd_bus_message_append(reply, "s", log_target_table[o->log_target]);
if (strcmp(property, "SyslogIdentifier") == 0)
return sd_bus_message_append(reply, "s", o->syslog_identifier);
return sd_bus_error_setf(error,
SD_BUS_ERROR_UNKNOWN_PROPERTY,
"Unknown property '%s'",
property);
}
static int property_set(
sd_bus *bus,
const char *path,
const char *interface,
const char *property,
sd_bus_message *message,
void *userdata,
sd_bus_error *error) {
object *o = userdata;
const char *value;
int r;
r = sd_bus_message_read(message, "s", &value);
if (r < 0)
return r;
if (strcmp(property, "LogLevel") == 0) {
int i;
for (i = 0; i < LOG_DEBUG + 1; i++)
if (strcmp(value, log_level_table[i]) == 0) {
o->log_level = i;
setlogmask(LOG_UPTO(i));
return 0;
}
return sd_bus_error_setf(error,
SD_BUS_ERROR_INVALID_ARGS,
"Invalid value for LogLevel: '%s'",
value);
}
if (strcmp(property, "LogTarget") == 0) {
LogTarget i;
for (i = 0; i < _LOG_TARGET_MAX; i++)
if (strcmp(value, log_target_table[i]) == 0) {
o->log_target = i;
return 0;
}
return sd_bus_error_setf(error,
SD_BUS_ERROR_INVALID_ARGS,
"Invalid value for LogTarget: '%s'",
value);
}
return sd_bus_error_setf(error,
SD_BUS_ERROR_UNKNOWN_PROPERTY,
"Unknown property '%s'",
property);
}
/* https://www.freedesktop.org/software/systemd/man/sd_bus_add_object.html
*/
static const sd_bus_vtable vtable[] = {
SD_BUS_VTABLE_START(0),
SD_BUS_WRITABLE_PROPERTY(
"LogLevel", "s",
property_get, property_set,
0,
0),
SD_BUS_WRITABLE_PROPERTY(
"LogTarget", "s",
property_get, property_set,
0,
0),
SD_BUS_PROPERTY(
"SyslogIdentifier", "s",
property_get,
0,
SD_BUS_VTABLE_PROPERTY_CONST),
SD_BUS_VTABLE_END
};
int main(int argc, char **argv) {
/* The bus should be relinquished before the program terminates. The cleanup
* attribute allows us to do it nicely and cleanly whenever we exit the
* block.
*/
_cleanup_(sd_bus_flush_close_unrefp) sd_bus *bus = NULL;
object o = {
.log_level = LOG_INFO,
.log_target = LOG_TARGET_JOURNAL,
.syslog_identifier = "example",
};
int r;
/* https://man7.org/linux/man-pages/man3/setlogmask.3.html
* Programs using syslog() instead of sd_journal can use this API to cut logs
* emission at the source.
*/
setlogmask(LOG_UPTO(o.log_level));
/* Acquire a connection to the bus, letting the library work out the details.
* https://www.freedesktop.org/software/systemd/man/sd_bus_default.html
*/
r = sd_bus_default(&bus);
if (r < 0)
return log_error(o.log_level, r, "sd_bus_default()");
/* Publish an interface on the bus, specifying our well-known object access
* path and public interface name.
* https://www.freedesktop.org/software/systemd/man/sd_bus_add_object.html
* https://dbus.freedesktop.org/doc/dbus-tutorial.html
*/
r = sd_bus_add_object_vtable(bus, NULL,
"/org/freedesktop/LogControl1",
"org.freedesktop.LogControl1",
vtable,
&o);
if (r < 0)
return log_error(o.log_level, r, "sd_bus_add_object_vtable()");
/* By default the service is assigned an ephemeral name. Also add a fixed
* one, so that clients know whom to call.
* https://www.freedesktop.org/software/systemd/man/sd_bus_request_name.html
*/
r = sd_bus_request_name(bus, "org.freedesktop.Example", 0);
if (r < 0)
return log_error(o.log_level, r, "sd_bus_request_name()");
for (;;) {
/* https://www.freedesktop.org/software/systemd/man/sd_bus_wait.html
*/
r = sd_bus_wait(bus, UINT64_MAX);
if (r < 0)
return log_error(o.log_level, r, "sd_bus_wait()");
/* https://www.freedesktop.org/software/systemd/man/sd_bus_process.html
*/
r = sd_bus_process(bus, NULL);
if (r < 0)
return log_error(o.log_level, r, "sd_bus_process()");
}
/* https://www.freedesktop.org/software/systemd/man/sd_bus_release_name.html
*/
r = sd_bus_release_name(bus, "org.freedesktop.Example");
if (r < 0)
return log_error(o.log_level, r, "sd_bus_release_name()");
return 0;
}

View File

@ -0,0 +1,188 @@
/* SPDX-License-Identifier: MIT-0 */
/* Implement the systemd notify protocol without external dependencies.
* Supports both readiness notification on startup and on reloading,
* according to the protocol defined at:
* https://www.freedesktop.org/software/systemd/man/latest/sd_notify.html
* This protocol is guaranteed to be stable as per:
* https://systemd.io/PORTABILITY_AND_STABILITY/ */
#define _GNU_SOURCE 1
#include <errno.h>
#include <inttypes.h>
#include <signal.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <time.h>
#include <unistd.h>
#define _cleanup_(f) __attribute__((cleanup(f)))
static void closep(int *fd) {
if (!fd || *fd < 0)
return;
close(*fd);
*fd = -1;
}
static int notify(const char *message) {
union sockaddr_union {
struct sockaddr sa;
struct sockaddr_un sun;
} socket_addr = {
.sun.sun_family = AF_UNIX,
};
size_t path_length, message_length;
_cleanup_(closep) int fd = -1;
const char *socket_path;
/* Verify the argument first */
if (!message)
return -EINVAL;
message_length = strlen(message);
if (message_length == 0)
return -EINVAL;
/* If the variable is not set, the protocol is a noop */
socket_path = getenv("NOTIFY_SOCKET");
if (!socket_path)
return 0; /* Not set? Nothing to do */
/* Only AF_UNIX is supported, with path or abstract sockets */
if (socket_path[0] != '/' && socket_path[0] != '@')
return -EAFNOSUPPORT;
path_length = strlen(socket_path);
/* Ensure there is room for NUL byte */
if (path_length >= sizeof(socket_addr.sun.sun_path))
return -E2BIG;
memcpy(socket_addr.sun.sun_path, socket_path, path_length);
/* Support for abstract socket */
if (socket_addr.sun.sun_path[0] == '@')
socket_addr.sun.sun_path[0] = 0;
fd = socket(AF_UNIX, SOCK_DGRAM|SOCK_CLOEXEC, 0);
if (fd < 0)
return -errno;
if (connect(fd, &socket_addr.sa, offsetof(struct sockaddr_un, sun_path) + path_length) != 0)
return -errno;
ssize_t written = write(fd, message, message_length);
if (written != (ssize_t) message_length)
return written < 0 ? -errno : -EPROTO;
return 1; /* Notified! */
}
static int notify_ready(void) {
return notify("READY=1");
}
static int notify_reloading(void) {
/* A buffer with length sufficient to format the maximum UINT64 value. */
char reload_message[sizeof("RELOADING=1\nMONOTONIC_USEC=18446744073709551615")];
struct timespec ts;
uint64_t now;
/* Notify systemd that we are reloading, including a CLOCK_MONOTONIC timestamp in usec
* so that the program is compatible with a Type=notify-reload service. */
if (clock_gettime(CLOCK_MONOTONIC, &ts) < 0)
return -errno;
if (ts.tv_sec < 0 || ts.tv_nsec < 0 ||
(uint64_t) ts.tv_sec > (UINT64_MAX - (ts.tv_nsec / 1000ULL)) / 1000000ULL)
return -EINVAL;
now = (uint64_t) ts.tv_sec * 1000000ULL + (uint64_t) ts.tv_nsec / 1000ULL;
if (snprintf(reload_message, sizeof(reload_message), "RELOADING=1\nMONOTONIC_USEC=%" PRIu64, now) < 0)
return -EINVAL;
return notify(reload_message);
}
static int notify_stopping(void) {
return notify("STOPPING=1");
}
static volatile sig_atomic_t reloading = 0;
static volatile sig_atomic_t terminating = 0;
static void signal_handler(int sig) {
if (sig == SIGHUP)
reloading = 1;
else if (sig == SIGINT || sig == SIGTERM)
terminating = 1;
}
int main(int argc, char **argv) {
struct sigaction sa = {
.sa_handler = signal_handler,
.sa_flags = SA_RESTART,
};
int r;
/* Setup signal handlers */
sigemptyset(&sa.sa_mask);
sigaction(SIGHUP, &sa, NULL);
sigaction(SIGINT, &sa, NULL);
sigaction(SIGTERM, &sa, NULL);
/* Do more service initialization work here … */
/* Now that all the preparations steps are done, signal readiness */
r = notify_ready();
if (r < 0) {
fprintf(stderr, "Failed to notify readiness to $NOTIFY_SOCKET: %s\n", strerror(-r));
return EXIT_FAILURE;
}
while (!terminating) {
if (reloading) {
reloading = false;
/* As a separate but related feature, we can also notify the manager
* when reloading configuration. This allows accurate state-tracking,
* and also automated hook-in of 'systemctl reload' without having to
* specify manually an ExecReload= line in the unit file. */
r = notify_reloading();
if (r < 0) {
fprintf(stderr, "Failed to notify reloading to $NOTIFY_SOCKET: %s\n", strerror(-r));
return EXIT_FAILURE;
}
/* Do some reconfiguration work here … */
r = notify_ready();
if (r < 0) {
fprintf(stderr, "Failed to notify readiness to $NOTIFY_SOCKET: %s\n", strerror(-r));
return EXIT_FAILURE;
}
}
/* Do some daemon work here … */
sleep(5);
}
r = notify_stopping();
if (r < 0) {
fprintf(stderr, "Failed to report termination to $NOTIFY_SOCKET: %s\n", strerror(-r));
return EXIT_FAILURE;
}
/* Do some shutdown work here … */
return EXIT_SUCCESS;
}

View File

@ -0,0 +1,19 @@
/* SPDX-License-Identifier: MIT-0 */
#include <stdio.h>
#include <stdlib.h>
#include <systemd/sd-path.h>
int main(void) {
int r;
char *t;
r = sd_path_lookup(SD_PATH_USER_DOCUMENTS, NULL, &t);
if (r < 0)
return EXIT_FAILURE;
printf("~/Documents: %s\n", t);
free(t);
return EXIT_SUCCESS;
}

View File

@ -0,0 +1,50 @@
/* SPDX-License-Identifier: MIT-0 */
/* This is equivalent to:
* busctl call org.freedesktop.systemd1 /org/freedesktop/systemd1 \
* org.freedesktop.systemd1.Manager GetUnitByPID $$
*
* Compile with 'cc print-unit-path-call-method.c -lsystemd'
*/
#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <systemd/sd-bus.h>
#define _cleanup_(f) __attribute__((cleanup(f)))
#define DESTINATION "org.freedesktop.systemd1"
#define PATH "/org/freedesktop/systemd1"
#define INTERFACE "org.freedesktop.systemd1.Manager"
#define MEMBER "GetUnitByPID"
static int log_error(int error, const char *message) {
fprintf(stderr, "%s: %s\n", message, strerror(-error));
return error;
}
int main(int argc, char **argv) {
_cleanup_(sd_bus_flush_close_unrefp) sd_bus *bus = NULL;
_cleanup_(sd_bus_error_free) sd_bus_error error = SD_BUS_ERROR_NULL;
_cleanup_(sd_bus_message_unrefp) sd_bus_message *reply = NULL;
int r;
r = sd_bus_open_system(&bus);
if (r < 0)
return log_error(r, "Failed to acquire bus");
r = sd_bus_call_method(bus, DESTINATION, PATH, INTERFACE, MEMBER, &error, &reply, "u", (unsigned) getpid());
if (r < 0)
return log_error(r, MEMBER " call failed");
const char *ans;
r = sd_bus_message_read(reply, "o", &ans);
if (r < 0)
return log_error(r, "Failed to read reply");
printf("Unit path is \"%s\".\n", ans);
return 0;
}

View File

@ -0,0 +1,59 @@
/* SPDX-License-Identifier: MIT-0 */
/* This is equivalent to:
* busctl call org.freedesktop.systemd1 /org/freedesktop/systemd1 \
* org.freedesktop.systemd1.Manager GetUnitByPID $$
*
* Compile with 'cc print-unit-path.c -lsystemd'
*/
#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <systemd/sd-bus.h>
#define _cleanup_(f) __attribute__((cleanup(f)))
#define DESTINATION "org.freedesktop.systemd1"
#define PATH "/org/freedesktop/systemd1"
#define INTERFACE "org.freedesktop.systemd1.Manager"
#define MEMBER "GetUnitByPID"
static int log_error(int error, const char *message) {
fprintf(stderr, "%s: %s\n", message, strerror(-error));
return error;
}
int main(int argc, char **argv) {
_cleanup_(sd_bus_flush_close_unrefp) sd_bus *bus = NULL;
_cleanup_(sd_bus_error_free) sd_bus_error error = SD_BUS_ERROR_NULL;
_cleanup_(sd_bus_message_unrefp) sd_bus_message *reply = NULL, *m = NULL;
int r;
r = sd_bus_open_system(&bus);
if (r < 0)
return log_error(r, "Failed to acquire bus");
r = sd_bus_message_new_method_call(bus, &m,
DESTINATION, PATH, INTERFACE, MEMBER);
if (r < 0)
return log_error(r, "Failed to create bus message");
r = sd_bus_message_append(m, "u", (unsigned) getpid());
if (r < 0)
return log_error(r, "Failed to append to bus message");
r = sd_bus_call(bus, m, -1, &error, &reply);
if (r < 0)
return log_error(r, MEMBER " call failed");
const char *ans;
r = sd_bus_message_read(reply, "o", &ans);
if (r < 0)
return log_error(r, "Failed to read reply");
printf("Unit path is \"%s\".\n", ans);
return 0;
}

View File

@ -0,0 +1,20 @@
/* SPDX-License-Identifier: MIT-0 */
#include <systemd/sd-bus.h>
int append_strings_to_message(sd_bus_message *m, const char *const *arr) {
        const char *s;
        int r;

        r = sd_bus_message_open_container(m, 'a', "s");
        if (r < 0)
                return r;

        /* 'arr' is a NULL-terminated array of strings; append each element. */
        while ((s = *arr++)) {
                r = sd_bus_message_append(m, "s", s);
                if (r < 0)
                        return r;
        }

        return sd_bus_message_close_container(m);
}
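
/* A possible usage sketch (the destination, object path, interface and member
 * names are hypothetical): build a method call, append a NULL-terminated string
 * array to it and send it. */
int call_with_strings(sd_bus *bus) {
        sd_bus_message *m = NULL, *reply = NULL;
        const char *const arr[] = { "hello", "world", NULL };
        int r;

        r = sd_bus_message_new_method_call(bus, &m,
                                           "org.example.Service",
                                           "/org/example/Service",
                                           "org.example.Interface",
                                           "SetStrings");
        if (r < 0)
                return r;

        r = append_strings_to_message(m, arr);
        if (r < 0)
                goto finish;

        r = sd_bus_call(bus, m, 0, NULL, &reply);

finish:
        sd_bus_message_unref(reply);
        sd_bus_message_unref(m);
        return r;
}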

View File

@ -0,0 +1,27 @@
/* SPDX-License-Identifier: MIT-0 */
#include <stdio.h>
#include <systemd/sd-bus.h>
int read_strings_from_message(sd_bus_message *m) {
int r;
r = sd_bus_message_enter_container(m, 'a', "s");
if (r < 0)
return r;
for (;;) {
const char *s;
r = sd_bus_message_read(m, "s", &s);
if (r < 0)
return r;
if (r == 0)
break;
printf("%s\n", s);
}
return sd_bus_message_exit_container(m);
}
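
/* A possible usage sketch (the destination, object path, interface and member
 * names are hypothetical): call a method that returns an array of strings
 * ("as") and print it with read_strings_from_message(). */
int call_and_print_strings(sd_bus *bus) {
        sd_bus_message *reply = NULL;
        int r;

        r = sd_bus_call_method(bus,
                               "org.example.Service",
                               "/org/example/Service",
                               "org.example.Interface",
                               "GetStrings",
                               NULL,    /* no sd_bus_error needed here */
                               &reply,
                               NULL);   /* no input arguments */
        if (r < 0)
                return r;

        r = read_strings_from_message(reply);
        sd_bus_message_unref(reply);
        return r;
}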

View File

@ -0,0 +1,18 @@
/* SPDX-License-Identifier: MIT-0 */
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <systemd/sd-bus.h>
int writer_with_negative_errno_return(int fd, sd_bus_error *error) {
const char *message = "Hello, World!\n";
ssize_t n = write(fd, message, strlen(message));
if (n >= 0)
return n; /* On success, return the number of bytes written, possibly 0. */
/* On error, initialize the error structure, and also propagate the errno
* value that write(2) set for us. */
return sd_bus_error_set_errnof(error, errno, "Failed to write to fd %i: %s", fd, strerror(errno));
}

View File

@ -0,0 +1,143 @@
/* SPDX-License-Identifier: MIT-0 */
#define _GNU_SOURCE 1
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <systemd/sd-bus.h>
#define _cleanup_(f) __attribute__((cleanup(f)))
typedef struct object {
char *name;
uint32_t number;
} object;
static int method(sd_bus_message *m, void *userdata, sd_bus_error *error) {
int r;
printf("Got called with userdata=%p\n", userdata);
if (sd_bus_message_is_method_call(m,
"org.freedesktop.systemd.VtableExample",
"Method4"))
return 1;
const char *string;
r = sd_bus_message_read(m, "s", &string);
if (r < 0) {
fprintf(stderr, "sd_bus_message_read() failed: %s\n", strerror(-r));
return 0;
}
r = sd_bus_reply_method_return(m, "s", string);
if (r < 0) {
fprintf(stderr, "sd_bus_reply_method_return() failed: %s\n", strerror(-r));
return 0;
}
return 1;
}
static const sd_bus_vtable vtable[] = {
SD_BUS_VTABLE_START(0),
SD_BUS_METHOD(
"Method1", "s", "s", method, 0),
SD_BUS_METHOD_WITH_NAMES_OFFSET(
"Method2",
"so", SD_BUS_PARAM(string) SD_BUS_PARAM(path),
"s", SD_BUS_PARAM(returnstring),
method, offsetof(object, number),
SD_BUS_VTABLE_DEPRECATED),
SD_BUS_METHOD_WITH_ARGS_OFFSET(
"Method3",
SD_BUS_ARGS("s", string, "o", path),
SD_BUS_RESULT("s", returnstring),
method, offsetof(object, number),
SD_BUS_VTABLE_UNPRIVILEGED),
SD_BUS_METHOD_WITH_ARGS(
"Method4",
SD_BUS_NO_ARGS,
SD_BUS_NO_RESULT,
method,
SD_BUS_VTABLE_UNPRIVILEGED),
SD_BUS_SIGNAL(
"Signal1",
"so",
0),
SD_BUS_SIGNAL_WITH_NAMES(
"Signal2",
"so", SD_BUS_PARAM(string) SD_BUS_PARAM(path),
0),
SD_BUS_SIGNAL_WITH_ARGS(
"Signal3",
SD_BUS_ARGS("s", string, "o", path),
0),
SD_BUS_WRITABLE_PROPERTY(
"AutomaticStringProperty", "s", NULL, NULL,
offsetof(object, name),
SD_BUS_VTABLE_PROPERTY_EMITS_CHANGE),
SD_BUS_WRITABLE_PROPERTY(
"AutomaticIntegerProperty", "u", NULL, NULL,
offsetof(object, number),
SD_BUS_VTABLE_PROPERTY_EMITS_INVALIDATION),
SD_BUS_VTABLE_END
};
int main(int argc, char **argv) {
_cleanup_(sd_bus_flush_close_unrefp) sd_bus *bus = NULL;
int r;
sd_bus_default(&bus);
object object = { .number = 666 };
object.name = strdup("name");
if (!object.name) {
fprintf(stderr, "OOM\n");
return EXIT_FAILURE;
}
r = sd_bus_add_object_vtable(bus, NULL,
"/org/freedesktop/systemd/VtableExample",
"org.freedesktop.systemd.VtableExample",
vtable,
&object);
if (r < 0) {
fprintf(stderr, "sd_bus_add_object_vtable() failed: %s\n", strerror(-r));
return EXIT_FAILURE;
}
r = sd_bus_request_name(bus,
"org.freedesktop.systemd.VtableExample",
0);
if (r < 0) {
fprintf(stderr, "sd_bus_request_name() failed: %s\n", strerror(-r));
return EXIT_FAILURE;
}
for (;;) {
r = sd_bus_wait(bus, UINT64_MAX);
if (r < 0) {
fprintf(stderr, "sd_bus_wait() failed: %s\n", strerror(-r));
return EXIT_FAILURE;
}
r = sd_bus_process(bus, NULL);
if (r < 0) {
fprintf(stderr, "sd_bus_process() failed: %s\n", strerror(-r));
return EXIT_FAILURE;
}
}
r = sd_bus_release_name(bus, "org.freedesktop.systemd.VtableExample");
if (r < 0) {
fprintf(stderr, "sd_bus_release_name() failed: %s\n", strerror(-r));
return EXIT_FAILURE;
}
free(object.name);
return 0;
}

View File

@ -0,0 +1,41 @@
#!/usr/bin/env python3
# SPDX-License-Identifier: MIT-0
"""
Proof-of-concept systemd environment generator that makes sure that bin dirs
are always after matching sbin dirs in the path.
(Changes /sbin:/bin:/foo/bar to /bin:/sbin:/foo/bar.)
This generator shows how to override the configuration possibly created by
earlier generators. It would be easier to write in bash, but let's have it
in Python just to prove that we can, and to serve as a template for more
interesting generators.
"""
import os
import pathlib
def rearrange_bin_sbin(path):
"""Make sure any pair of …/bin, …/sbin directories is in this order
>>> rearrange_bin_sbin('/bin:/sbin:/usr/sbin:/usr/bin')
'/bin:/sbin:/usr/bin:/usr/sbin'
"""
items = [pathlib.Path(p) for p in path.split(':')]
for i in range(len(items)):
if 'sbin' in items[i].parts:
ind = items[i].parts.index('sbin')
bin = pathlib.Path(*items[i].parts[:ind], 'bin', *items[i].parts[ind+1:])
if bin in items[i+1:]:
j = i + 1 + items[i+1:].index(bin)
items[i], items[j] = items[j], items[i]
return ':'.join(p.as_posix() for p in items)
if __name__ == '__main__':
path = os.environ['PATH'] # This should be always set.
# If it's not, we'll just crash, which is OK too.
new = rearrange_bin_sbin(path)
if new != path:
print('PATH={}'.format(new))

View File

@ -0,0 +1,12 @@
#!/usr/bin/python
# SPDX-License-Identifier: MIT-0
import platform
os_release = platform.freedesktop_os_release()
pretty_name = os_release.get('PRETTY_NAME', 'Linux')
print(f'Running on {pretty_name!r}')
if 'fedora' in [os_release.get('ID', 'linux'),
*os_release.get('ID_LIKE', '').split()]:
print('Looks like Fedora!')

View File

@ -0,0 +1,37 @@
#!/usr/bin/python
# SPDX-License-Identifier: MIT-0
import ast
import re
import sys
def read_os_release():
try:
filename = '/etc/os-release'
f = open(filename)
except FileNotFoundError:
filename = '/usr/lib/os-release'
f = open(filename)
for line_number, line in enumerate(f, start=1):
line = line.rstrip()
if not line or line.startswith('#'):
continue
m = re.match(r'([A-Z][A-Z_0-9]+)=(.*)', line)
if m:
name, val = m.groups()
if val and val[0] in '"\'':
val = ast.literal_eval(val)
yield name, val
else:
print(f'{filename}:{line_number}: bad line {line!r}',
file=sys.stderr)
os_release = dict(read_os_release())
pretty_name = os_release.get('PRETTY_NAME', 'Linux')
print(f'Running on {pretty_name!r}')
if 'debian' in [os_release.get('ID', 'linux'),
*os_release.get('ID_LIKE', '').split()]:
print('Looks like Debian!')

View File

@ -0,0 +1,104 @@
#!/usr/bin/env python3
# SPDX-License-Identifier: MIT-0
#
# Implement the systemd notify protocol without external dependencies.
# Supports both readiness notification on startup and on reloading,
# according to the protocol defined at:
# https://www.freedesktop.org/software/systemd/man/latest/sd_notify.html
# This protocol is guaranteed to be stable as per:
# https://systemd.io/PORTABILITY_AND_STABILITY/
import errno
import os
import signal
import socket
import sys
import time
reloading = False
terminating = False
def notify(message):
if not message:
raise ValueError("notify() requires a message")
socket_path = os.environ.get("NOTIFY_SOCKET")
if not socket_path:
return
if socket_path[0] not in ("/", "@"):
raise OSError(errno.EAFNOSUPPORT, "Unsupported socket type")
# Handle abstract socket.
if socket_path[0] == "@":
socket_path = "\0" + socket_path[1:]
with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM | socket.SOCK_CLOEXEC) as sock:
sock.connect(socket_path)
sock.sendall(message)
def notify_ready():
notify(b"READY=1")
def notify_reloading():
microsecs = time.clock_gettime_ns(time.CLOCK_MONOTONIC) // 1000
notify(f"RELOADING=1\nMONOTONIC_USEC={microsecs}".encode())
def notify_stopping():
notify(b"STOPPING=1")
def reload(signum, frame):
global reloading
reloading = True
def terminate(signum, frame):
global terminating
terminating = True
def main():
print("Doing initial setup")
global reloading, terminating
# Set up signal handlers.
print("Setting up signal handlers")
signal.signal(signal.SIGHUP, reload)
signal.signal(signal.SIGINT, terminate)
signal.signal(signal.SIGTERM, terminate)
# Do any other setup work here.
# Once all setup is done, signal readiness.
print("Done setting up")
notify_ready()
print("Starting loop")
while not terminating:
if reloading:
print("Reloading")
reloading = False
# Support notifying the manager when reloading configuration.
# This allows accurate state tracking as well as automatically
# enabling 'systemctl reload' without needing to manually
# specify an ExecReload= line in the unit file.
notify_reloading()
# Do some reconfiguration work here.
print("Done reloading")
notify_ready()
# Do the real work here ...
print("Sleeping for five seconds")
time.sleep(5)
print("Terminating")
notify_stopping()
if __name__ == "__main__":
sys.stdout.reconfigure(line_buffering=True)
print("Starting app")
main()
print("Stopped app")

View File

@ -0,0 +1,11 @@
#!/bin/sh -eu
# SPDX-License-Identifier: MIT-0
test -e /etc/os-release && os_release='/etc/os-release' || os_release='/usr/lib/os-release'
. "${os_release}"
echo "Running on ${PRETTY_NAME:-Linux}"
if [ "${ID:-linux}" = "debian" ] || [ "${ID_LIKE#*debian*}" != "${ID_LIKE}" ]; then
echo "Looks like Debian!"
fi

View File

@ -0,0 +1,212 @@
# SPDX-License-Identifier: LGPL-2.1-or-later
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
import sys
import os
project = 'systemd'
copyright = '2024, systemd'
author = 'systemd'
sys.path.append(os.path.abspath("./_ext"))
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = ['sphinxcontrib.globalsubs',
'directive_roles', 'external_man_links', 'autogen_index']
templates_path = ['_templates']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
html_theme = 'furo'
html_static_path = ['_static']
html_title = ''
html_css_files = [
'css/custom.css',
]
html_theme_options = {
# TODO: update these `source`-options with the proper values
"source_repository": "https://github.com/neighbourhoodie/nh-systemd",
"source_branch": "man_pages_in_sphinx",
"source_directory": "doc-migration/source/",
"light_logo": "systemd-logo.svg",
"dark_logo": "systemd-logo.svg",
"light_css_variables": {
"color-brand-primary": "#35a764",
"color-brand-content": "#35a764",
},
}
man_pages = [
('busctl', 'busctl', 'Introspect the bus', None, '1'),
('journalctl', 'journalctl', 'Print log entries from the systemd journal', None, '1'),
('os-release', 'os-release', 'Operating system identification', None, '5'),
('systemd', 'systemd, init', 'systemd system and service manager', None, '1'),
]
global_substitutions = {f'v{n}': f'{n}' for n in range(183, 300)} | {
# Custom Entities
'MOUNT_PATH': '{{MOUNT_PATH}}',
'UMOUNT_PATH': '{{UMOUNT_PATH}}',
'SYSTEM_GENERATOR_DIR': '{{SYSTEM_GENERATOR_DIR}}',
'USER_GENERATOR_DIR': '{{USER_GENERATOR_DIR}}',
'SYSTEM_ENV_GENERATOR_DIR': '{{SYSTEM_ENV_GENERATOR_DIR}}',
'USER_ENV_GENERATOR_DIR': '{{USER_ENV_GENERATOR_DIR}}',
'CERTIFICATE_ROOT': '{{CERTIFICATE_ROOT}}',
'FALLBACK_HOSTNAME': '{{FALLBACK_HOSTNAME}}',
'MEMORY_ACCOUNTING_DEFAULT': "{{ 'yes' if MEMORY_ACCOUNTING_DEFAULT else 'no' }}",
'KILL_USER_PROCESSES': "{{ 'yes' if KILL_USER_PROCESSES else 'no' }}",
'DEBUGTTY': '{{DEBUGTTY}}',
'RC_LOCAL_PATH': '{{RC_LOCAL_PATH}}',
'HIGH_RLIMIT_NOFILE': '{{HIGH_RLIMIT_NOFILE}}',
'DEFAULT_DNSSEC_MODE': '{{DEFAULT_DNSSEC_MODE_STR}}',
'DEFAULT_DNS_OVER_TLS_MODE': '{{DEFAULT_DNS_OVER_TLS_MODE_STR}}',
'DEFAULT_TIMEOUT': '{{DEFAULT_TIMEOUT_SEC}} s',
'DEFAULT_USER_TIMEOUT': '{{DEFAULT_USER_TIMEOUT_SEC}} s',
'DEFAULT_KEYMAP': '{{SYSTEMD_DEFAULT_KEYMAP}}',
'fedora_latest_version': '40',
'fedora_cloud_release': '1.10',
}
# Existing lists of directive groups
directives_data = [
{
"id": "unit-directives",
"title": "Unit directives",
"description": "Directives for configuring units, used in unit files."
},
{
"id": "kernel-commandline-options",
"title": "Options on the kernel command line",
"description": "Kernel boot options for configuring the behaviour of the systemd process."
},
{
"id": "smbios-type-11-options",
"title": "SMBIOS Type 11 Variables",
"description": "Data passed from VMM to system via SMBIOS Type 11."
},
{
"id": "environment-variables",
"title": "Environment variables",
"description": "Environment variables understood by the systemd manager and other programs and environment variable-compatible settings."
},
{
"id": "system-credentials",
"title": "System Credentials",
"description": "System credentials understood by the system and service manager and various other components:"
},
{
"id": "efi-variables",
"title": "EFI variables",
"description": "EFI variables understood by\n "
},
{
"id": "home-directives",
"title": "Home Area/User Account directives",
"description": "Directives for configuring home areas and user accounts via\n "
},
{
"id": "udev-directives",
"title": "UDEV directives",
"description": "Directives for configuring systemd units through the udev database."
},
{
"id": "network-directives",
"title": "Network directives",
"description": "Directives for configuring network links through the net-setup-link udev builtin and networks\n through systemd-networkd."
},
{
"id": "journal-directives",
"title": "Journal fields",
"description": "Fields in the journal events with a well known meaning."
},
{
"id": "pam-directives",
"title": "PAM configuration directives",
"description": "Directives for configuring PAM behaviour."
},
{
"id": "fstab-options",
"title": 'fstab-options',
"description": "Options which influence mounted filesystems and encrypted volumes."
},
{
"id": "nspawn-directives",
"title": 'nspawn-directives',
"description": "Directives for configuring systemd-nspawn containers."
},
{
"id": "config-directives",
"title": "Program configuration options",
"description": "Directives for configuring the behaviour of the systemd process and other tools through\n configuration files."
},
{
"id": "options",
"title": "Command line options",
"description": "Command-line options accepted by programs in the systemd suite."
},
{
"id": "constants",
"title": "Constants",
"description": "Various constants used and/or defined by systemd."
},
{
"id": "dns",
"title": "DNS resource record types",
"description": "No description available"
},
{
"id": "miscellaneous",
"title": "Miscellaneous options and directives",
"description": "Other configuration elements which don't fit in any of the above groups."
},
{
"id": "specifiers",
"title": "Specifiers",
"description": "Short strings which are substituted in configuration directives."
},
{
"id": "filenames",
"title": "Files and directories",
"description": "Paths and file names referred to in the documentation."
},
{
"id": "dbus-interface",
"title": "D-Bus interfaces",
"description": "Interfaces exposed over D-Bus."
},
{
"id": "dbus-method",
"title": "D-Bus methods",
"description": "Methods exposed in the D-Bus interface."
},
{
"id": "dbus-property",
"title": "D-Bus properties",
"description": "Properties exposed in the D-Bus interface."
},
{
"id": "dbus-signal",
"title": "D-Bus signals",
"description": "Signals emitted in the D-Bus interface."
}
]
role_types = [
'constant',
'var',
'option'
]

View File

@ -0,0 +1,593 @@
.. SPDX-License-Identifier: LGPL-2.1-or-later:
:title: busctl
:manvolnum: 1
.. _busctl(1):
=========
busctl(1)
=========
.. only:: html
busctl — Introspect the bus
###########################
Synopsis
########
``busctl`` [OPTIONS...] [COMMAND] [<NAME>...]
Description
===========
``busctl`` may be used to
introspect and monitor the D-Bus bus.
Commands
========
The following commands are understood:
``list``
--------
Show all peers on the bus, by their service
names. By default, shows both unique and well-known names, but
this may be changed with the ``--unique`` and
``--acquired`` switches. This is the default
operation if no command is specified.
.. only:: html
.. versionadded:: 209
``status [<SERVICE>]``
----------------------
Show process information and credentials of a
bus service (if one is specified by its unique or well-known
name), a process (if one is specified by its numeric PID), or
the owner of the bus (if no parameter is
specified).
.. only:: html
.. versionadded:: 209
``monitor [<SERVICE>...]``
--------------------------
Dump messages being exchanged. If
<SERVICE> is specified, show messages
to or from this peer, identified by its well-known or unique
name. Otherwise, show all messages on the bus. Use
:kbd:`Ctrl` + :kbd:`C`
to terminate the dump.
.. only:: html
.. versionadded:: 209
``capture [<SERVICE>...]``
--------------------------
Similar to ``monitor`` but
writes the output in pcapng format (for details, see
`PCAP Next Generation (pcapng) Capture File Format <https://github.com/pcapng/pcapng/>`_).
Make sure to redirect standard output to a file or pipe. Tools like
:die-net:`wireshark(1)`
may be used to dissect and view the resulting
files.
.. only:: html
.. versionadded:: 218
``tree [<SERVICE>...]``
-----------------------
Shows an object tree of one or more
services. If <SERVICE> is specified,
show object tree of the specified services only. Otherwise,
show all object trees of all services on the bus that acquired
at least one well-known name.
.. only:: html
.. versionadded:: 218
``introspect <SERVICE> <OBJECT> [<INTERFACE>]``
-----------------------------------------------
Show interfaces, methods, properties and
signals of the specified object (identified by its path) on
the specified service. If the interface argument is passed, the
output is limited to members of the specified
interface.
.. only:: html
.. versionadded:: 218
``call <SERVICE> <OBJECT> <INTERFACE> <METHOD> [<SIGNATURE>[<ARGUMENT>...]]``
-----------------------------------------------------------------------------
Invoke a method and show the response. Takes a
service name, object path, interface name and method name. If
parameters shall be passed to the method call, a signature
string is required, followed by the arguments, individually
formatted as strings. For details on the formatting used, see
below. To suppress output of the returned data, use the
``--quiet`` option.
.. only:: html
.. versionadded:: 218
``emit <OBJECT> <INTERFACE> <SIGNAL> [<SIGNATURE>[<ARGUMENT>...]]``
-------------------------------------------------------------------
Emit a signal. Takes an object path, interface name and signal name. If parameters
shall be passed, a signature string is required, followed by the arguments, individually formatted as
strings. For details on the formatting used, see below. To specify the destination of the signal,
use the ``--destination=`` option.
.. only:: html
.. versionadded:: 242
``get-property <SERVICE> <OBJECT> <INTERFACE> <PROPERTY>``
----------------------------------------------------------
Retrieve the current value of one or more
object properties. Takes a service name, object path,
interface name and property name. Multiple properties may be
specified at once, in which case their values will be shown one
after the other, separated by newlines. The output is, by
default, in terse format. Use ``--verbose`` for a
more elaborate output format.
.. only:: html
.. versionadded:: 218
``set-property <SERVICE> <OBJECT> <INTERFACE> <PROPERTY> <SIGNATURE> <ARGUMENT>``
---------------------------------------------------------------------------------
Set the current value of an object
property. Takes a service name, object path, interface name,
property name, property signature, followed by a list of
parameters formatted as strings.
.. only:: html
.. versionadded:: 218
``help``
--------
Show command syntax help.
.. only:: html
.. versionadded:: 209
Options
=======
The following options are understood:
``--address=<ADDRESS>``
-----------------------
Connect to the bus specified by
<ADDRESS> instead of using suitable
defaults for either the system or user bus (see
``--system`` and ``--user``
options).
.. only:: html
.. versionadded:: 209
``--show-machine``
------------------
When showing the list of peers, show a
column containing the names of containers they belong to.
See
:ref:`systemd-machined.service(8)`.
.. only:: html
.. versionadded:: 209
``--unique``
------------
When showing the list of peers, show only
"unique" names (of the form
":<number>.<number>").
.. only:: html
.. versionadded:: 209
``--acquired``
--------------
The opposite of ``--unique``:
only "well-known" names will be shown.
.. only:: html
.. versionadded:: 209
``--activatable``
-----------------
When showing the list of peers, show only
peers which have not actually been activated yet, but may be
started automatically if accessed.
.. only:: html
.. versionadded:: 209
``--match=<MATCH>``
-------------------
When showing messages being exchanged, show only the
subset matching <MATCH>.
See
:ref:`sd_bus_add_match(3)`.
.. only:: html
.. versionadded:: 209
``--size=``
-----------
When used with the ``capture`` command,
specifies the maximum bus message size to capture
("snaplen"). Defaults to 4096 bytes.
.. only:: html
.. versionadded:: 218
``--list``
----------
When used with the ``tree`` command, shows a
flat list of object paths instead of a tree.
.. only:: html
.. versionadded:: 218
``-q, --quiet``
---------------
When used with the ``call`` command,
suppresses display of the response message payload. Note that even
if this option is specified, errors returned will still be
printed and the tool will indicate success or failure with
the process exit code.
.. only:: html
.. versionadded:: 218
``--verbose``
-------------
When used with the ``call`` or
``get-property`` command, shows output in a
more verbose format.
.. only:: html
.. versionadded:: 218
``--xml-interface``
-------------------
When used with the ``introspect`` call, dump the XML description received from
the D-Bus ``org.freedesktop.DBus.Introspectable.Introspect`` call instead of the
normal output.
.. only:: html
.. versionadded:: 243
``--json=<MODE>``
-----------------
When used with the ``call`` or ``get-property`` command, shows output
formatted as JSON. Expects one of "short" (for the shortest possible output without any
redundant whitespace or line breaks) or "pretty" (for a pretty version of the same, with
indentation and line breaks). Note that transformation from D-Bus marshalling to JSON is done in a loss-less
way, which means type information is embedded into the JSON object tree.
.. only:: html
.. versionadded:: 240
``-j``
------
Equivalent to ``--json=pretty`` when invoked interactively from a terminal. Otherwise
equivalent to ``--json=short``, in particular when the output is piped to some other
program.
.. only:: html
.. versionadded:: 240
``--expect-reply=<BOOL>``
-------------------------
When used with the ``call`` command,
specifies whether ``busctl`` shall wait for
completion of the method call, output the returned method
response data, and return success or failure via the process
exit code. If this is set to "no", the
method call will be issued but no response is expected, the
tool terminates immediately, and thus no response can be
shown, and no success or failure is returned via the exit
code. To only suppress output of the reply message payload,
use ``--quiet`` above. Defaults to
"yes".
.. only:: html
.. versionadded:: 218
``--auto-start=<BOOL>``
-----------------------
When used with the ``call`` or ``emit`` command, specifies
whether the method call should implicitly activate the
called service, should it not be running yet but is
configured to be auto-started. Defaults to
"yes".
.. only:: html
.. versionadded:: 218
``--allow-interactive-authorization=<BOOL>``
--------------------------------------------
When used with the ``call`` command,
specifies whether the services may enforce interactive
authorization while executing the operation, if the security
policy is configured for this. Defaults to
"yes".
.. only:: html
.. versionadded:: 218
``--timeout=<SECS>``
--------------------
When used with the ``call`` command,
specifies the maximum time to wait for method call
completion. If no time unit is specified, assumes
seconds. The usual other units are understood, too (ms, us,
s, min, h, d, w, month, y). Note that this timeout does not
apply if ``--expect-reply=no`` is used, as the
tool does not wait for any reply message then. When not
specified or when set to 0, the default of
"25s" is assumed.
.. only:: html
.. versionadded:: 218
``--augment-creds=<BOOL>``
--------------------------
Controls whether credential data reported by
``list`` or ``status`` shall
be augmented with data from
``/proc/``. When this is turned on, the data
shown is possibly inconsistent, as the data read from
``/proc/`` might be more recent than the rest of
the credential information. Defaults to "yes".
.. only:: html
.. versionadded:: 218
``--watch-bind=<BOOL>``
-----------------------
Controls whether to wait for the specified ``AF_UNIX`` bus socket to appear in the
file system before connecting to it. Defaults to off. When enabled, the tool will watch the file system until
the socket is created and then connect to it.
.. only:: html
.. versionadded:: 237
``--destination=<SERVICE>``
---------------------------
Takes a service name. When used with the ``emit`` command, a signal is
emitted to the specified service.
.. only:: html
.. versionadded:: 242
.. include:: ../includes/user-system-options.rst
:start-after: .. inclusion-marker-do-not-remove user
:end-before: .. inclusion-end-marker-do-not-remove user
.. include:: ../includes/user-system-options.rst
:start-after: .. inclusion-marker-do-not-remove system
:end-before: .. inclusion-end-marker-do-not-remove system
.. include:: ../includes/user-system-options.rst
:start-after: .. inclusion-marker-do-not-remove host
:end-before: .. inclusion-end-marker-do-not-remove host
.. include:: ../includes/user-system-options.rst
:start-after: .. inclusion-marker-do-not-remove machine
:end-before: .. inclusion-end-marker-do-not-remove machine
.. include:: ../includes/user-system-options.rst
:start-after: .. inclusion-marker-do-not-remove capsule
:end-before: .. inclusion-end-marker-do-not-remove capsule
``-l, --full``
--------------
Do not ellipsize the output of the ``list`` command.
.. only:: html
.. versionadded:: 245
.. include:: ../includes/standard-options.rst
:start-after: .. inclusion-marker-do-not-remove no-pager
:end-before: .. inclusion-end-marker-do-not-remove no-pager
.. include:: ../includes/standard-options.rst
:start-after: .. inclusion-marker-do-not-remove no-legend
:end-before: .. inclusion-end-marker-do-not-remove no-legend
.. include:: ../includes/standard-options.rst
:start-after: .. inclusion-marker-do-not-remove help
:end-before: .. inclusion-end-marker-do-not-remove help
.. include:: ../includes/standard-options.rst
:start-after: .. inclusion-marker-do-not-remove version
:end-before: .. inclusion-end-marker-do-not-remove version
Parameter Formatting
====================
The ``call`` and
``set-property`` commands take a signature string
followed by a list of parameters formatted as string (for details
on D-Bus signature strings, see the `Type
system chapter of the D-Bus specification <https://dbus.freedesktop.org/doc/dbus-specification.html#type-system>`_). For simple
types, each parameter following the signature should simply be the
parameter's value formatted as string. Positive boolean values may
be formatted as "true", "yes",
"on", or "1"; negative boolean
values may be specified as "false",
"no", "off", or
"0". For arrays, a numeric argument for the
number of entries followed by the entries shall be specified. For
variants, the signature of the contents shall be specified,
followed by the contents. For dictionaries and structs, the
contents of them shall be directly specified.
For example,
.. code-block:: sh
s jawoll
is the formatting
of a single string "jawoll".
.. code-block:: sh
as 3 hello world foobar
is the formatting of a string array with three entries,
"hello", "world" and
"foobar".
.. code-block:: sh
a{sv} 3 One s Eins Two u 2 Yes b true
is the formatting of a dictionary
array that maps strings to variants, consisting of three
entries. The string "One" is assigned the
string "Eins". The string
"Two" is assigned the 32-bit unsigned
integer 2. The string "Yes" is assigned a
positive boolean.
Note that the ``call``,
``get-property``, ``introspect``
commands will also generate output in this format for the returned
data. Since this format is sometimes too terse to be easily
understood, the ``call`` and
``get-property`` commands may generate a more
verbose, multi-line output when passed the
``--verbose`` option.
Examples
========
Write and Read a Property
-------------------------
The following two commands first write a property and then
read it back. The property is found on the
"/org/freedesktop/systemd1" object of the
"org.freedesktop.systemd1" service. The name of
the property is "LogLevel" on the
"org.freedesktop.systemd1.Manager"
interface. The property contains a single string:
.. code-block:: sh
# busctl set-property org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager LogLevel s debug
# busctl get-property org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager LogLevel
s "debug"
Terse and Verbose Output
------------------------
The following two commands read a property that contains
an array of strings, and first show it in terse format, followed
by verbose format:
.. code-block:: sh
$ busctl get-property org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager Environment
as 2 "LANG=en_US.UTF-8" "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
$ busctl get-property --verbose org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager Environment
ARRAY "s" {
STRING "LANG=en_US.UTF-8";
STRING "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin";
};
Invoking a Method
-----------------
The following command invokes the
"StartUnit" method on the
"org.freedesktop.systemd1.Manager"
interface of the
"/org/freedesktop/systemd1" object
of the "org.freedesktop.systemd1"
service, and passes it two strings
"cups.service" and
"replace". As a result of the method
call, a single object path parameter is received and
shown:
.. code-block:: sh
# busctl call org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager StartUnit ss "cups.service" "replace"
o "/org/freedesktop/systemd1/job/42684"
See Also
========
:dbus:`dbus-daemon(1)`, `D-Bus <https://www.freedesktop.org/wiki/Software/dbus>`_, :ref:`sd-bus(3)`, :ref:`varlinkctl(1)`, :ref:`systemd(1)`, :ref:`machinectl(1)`, :die-net:`wireshark(1)`

View File

@ -0,0 +1,315 @@
.. SPDX-License-Identifier: LGPL-2.1-or-later:
:title: sd_journal_get_data
:manvolnum: 3
.. _sd_journal_get_data(3):
======================
sd_journal_get_data(3)
======================
.. only:: html
sd_journal_get_data — sd_journal_enumerate_data — sd_journal_enumerate_available_data — sd_journal_restart_data — SD_JOURNAL_FOREACH_DATA — sd_journal_set_data_threshold — sd_journal_get_data_threshold — Read data fields from the current journal entry
###########################################################################################################################################################################################################################################################
Synopsis
########
``#include <systemd/sd-journal.h>``
.. code-block::
int sd_journal_get_data(sd_journal *j,
const char *field,
const void **data,
size_t *length);
.. code-block::
int sd_journal_enumerate_data(sd_journal *j,
const void **data,
size_t *length);
.. code-block::
int sd_journal_enumerate_available_data(sd_journal *j,
const void **data,
size_t *length);
.. code-block::
void sd_journal_restart_data(sd_journal *j);
.. code-block::
SD_JOURNAL_FOREACH_DATA(sd_journal *j,
const void *data,
size_t length);
.. code-block::
int sd_journal_set_data_threshold(sd_journal *j,
size_t sz);
.. code-block::
int sd_journal_get_data_threshold(sd_journal *j,
size_t *sz);
Description
===========
sd_journal_get_data() gets the data object associated with a specific field
from the current journal entry. It takes four arguments: the journal context object, a string with the
field name to request, plus a pair of pointers to pointer/size variables where the data object and its
size shall be stored in. The field name should be an entry field name. Well-known field names are listed in
:ref:`systemd.journal-fields(7)`,
but any field can be specified. The returned data is in a read-only memory map and is only valid until
the next invocation of sd_journal_get_data(),
sd_journal_enumerate_data(),
sd_journal_enumerate_available_data(), or when the read pointer is altered. Note
that the data returned will be prefixed with the field name and "=". Also note that, by
default, data fields larger than 64K might get truncated to 64K. This threshold may be changed and turned
off with sd_journal_set_data_threshold() (see below).
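For orientation, the following minimal, self-contained sketch (not part of the
upstream example set; the choice of the "MESSAGE" field is arbitrary) shows how
sd_journal_get_data() might be used to read a single field of the oldest entry
in the local journal. It can be compiled with, e.g.,
``cc example.c $(pkg-config --cflags --libs libsystemd)``:

.. code-block:: c

   #include <stdio.h>
   #include <systemd/sd-journal.h>

   int main(void) {
           sd_journal *j = NULL;
           const void *data;
           size_t length;
           int r;

           r = sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY);
           if (r < 0)
                   return 1;

           /* Position the read pointer first: sd_journal_get_data() returns
            * -EADDRNOTAVAIL if sd_journal_next() (or a related call) has not
            * been invoked yet. */
           r = sd_journal_next(j);
           if (r > 0) {
                   r = sd_journal_get_data(j, "MESSAGE", &data, &length);
                   if (r >= 0)
                           /* The returned datum is prefixed with "MESSAGE=". */
                           printf("%.*s\n", (int) length, (const char *) data);
           }

           sd_journal_close(j);
           return 0;
   }
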
sd_journal_enumerate_data() may be used
to iterate through all fields of the current entry. On each
invocation the data for the next field is returned. The order of
these fields is not defined. The data returned is in the same
format as with sd_journal_get_data() and also
follows the same life-time semantics.
sd_journal_enumerate_available_data() is similar to
sd_journal_enumerate_data(), but silently skips any fields which may be valid, but
are too large or not supported by the current implementation.
sd_journal_restart_data() resets the
data enumeration index to the beginning of the entry. The next
invocation of sd_journal_enumerate_data()
will return the first field of the entry again.
Note that the SD_JOURNAL_FOREACH_DATA() macro may be used as a handy wrapper
around sd_journal_restart_data() and
sd_journal_enumerate_available_data().
Note that these functions will not work before
:ref:`sd_journal_next(3)`
(or related call) has been called at least once, in order to
position the read pointer at a valid entry.
sd_journal_set_data_threshold() may be
used to change the data field size threshold for data returned by
sd_journal_get_data(),
sd_journal_enumerate_data() and
sd_journal_enumerate_unique(). This threshold
is a hint only: it indicates that the client program is interested
only in the initial parts of the data fields, up to the threshold
in size — but the library might still return larger data objects.
That means applications should not rely exclusively on this
setting to limit the size of the data fields returned, but need to
apply an explicit size limit on the returned data as well. This
threshold defaults to 64K. To retrieve the complete
data fields this threshold should be turned off by setting it to
0, so that the library always returns the complete data objects.
It is recommended to set this threshold as low as possible since
this relieves the library from having to decompress large
compressed data objects in full.
sd_journal_get_data_threshold() returns
the currently configured data field size threshold.
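As a hedged illustration (not taken from the upstream page; the helper name is
invented), turning the threshold off before reading large fields could look
like this:

.. code-block:: c

   #include <systemd/sd-journal.h>

   /* Sketch: disable the data size threshold so that subsequent
    * sd_journal_get_data()/sd_journal_enumerate_data() calls return
    * complete, untruncated field data. */
   static int disable_data_truncation(sd_journal *j) {
           size_t current;
           int r;

           r = sd_journal_get_data_threshold(j, &current);
           if (r < 0)
                   return r;

           if (current == 0)
                   return 0; /* truncation already disabled */

           return sd_journal_set_data_threshold(j, 0);
   }
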
Return Value
============
sd_journal_get_data() returns 0 on success or a negative errno-style error
code. sd_journal_enumerate_data() and
sd_journal_enumerate_available_data() return a positive integer if the next field
has been read, 0 when no more fields remain, or a negative errno-style error code.
sd_journal_restart_data() doesn't return anything.
sd_journal_set_data_threshold() and sd_journal_get_data_threshold()
return 0 on success or a negative errno-style error code.
Errors
------
Returned errors may indicate the following problems:
.. inclusion-marker-do-not-remove EINVAL
.. option:: -EINVAL
One of the required parameters is ``NULL`` or invalid.
.. only:: html
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove EINVAL
.. inclusion-marker-do-not-remove ECHILD
.. option:: -ECHILD
The journal object was created in a different process, library or module instance.
.. only:: html
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove ECHILD
.. inclusion-marker-do-not-remove EADDRNOTAVAIL
.. option:: -EADDRNOTAVAIL
The read pointer is not positioned at a valid entry;
:ref:`sd_journal_next(3)`
or a related call has not been called at least once.
.. only:: html
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove EADDRNOTAVAIL
.. inclusion-marker-do-not-remove ENOENT
.. option:: -ENOENT
The current entry does not include the specified field.
.. only:: html
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove ENOENT
.. inclusion-marker-do-not-remove ENOMEM
.. option:: -ENOMEM
Memory allocation failed.
.. only:: html
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove ENOMEM
.. inclusion-marker-do-not-remove ENOBUFS
.. option:: -ENOBUFS
A compressed entry is too large.
.. only:: html
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove ENOBUFS
.. inclusion-marker-do-not-remove E2BIG
.. option:: -E2BIG
The data field is too large for this computer architecture (e.g. above 4 GB on a
32-bit architecture).
.. only:: html
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove E2BIG
.. inclusion-marker-do-not-remove EPROTONOSUPPORT
.. option:: -EPROTONOSUPPORT
The journal is compressed with an unsupported method or the journal uses an
unsupported feature.
.. only:: html
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove EPROTONOSUPPORT
.. inclusion-marker-do-not-remove EBADMSG
.. option:: -EBADMSG
The journal is corrupted (possibly just the entry being iterated over).
.. only:: html
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove EBADMSG
.. inclusion-marker-do-not-remove EIO
.. option:: -EIO
An I/O error was reported by the kernel.
.. only:: html
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove EIO
Notes
=====
.. include:: ../includes/threads-aware.rst
:start-after: .. inclusion-marker-do-not-remove strict
:end-before: .. inclusion-end-marker-do-not-remove strict
.. include:: ../includes/libsystemd-pkgconfig.rst
:start-after: .. inclusion-marker-do-not-remove pkgconfig-text
:end-before: .. inclusion-end-marker-do-not-remove pkgconfig-text
Examples
========
See
:ref:`sd_journal_next(3)`
for a complete example of how to use sd_journal_get_data().
Use the SD_JOURNAL_FOREACH_DATA() macro to iterate through all fields
of the current journal entry:
.. code-block:: c

   /* Iterate through all fields of the current journal entry */
   int print_fields(sd_journal *j) {
           const void *data;
           size_t length;

           SD_JOURNAL_FOREACH_DATA(j, data, length)
                   printf("%.*s\n", (int) length, (const char *) data);

           return 0;
   }

History
=======
sd_journal_get_data(),
sd_journal_enumerate_data(),
sd_journal_restart_data(), and
SD_JOURNAL_FOREACH_DATA() were added in version 187.
sd_journal_set_data_threshold() and
sd_journal_get_data_threshold() were added in version 196.
sd_journal_enumerate_available_data() was added in version 246.
See Also
========
:ref:`systemd(1)`, :ref:`systemd.journal-fields(7)`, :ref:`sd-journal(3)`, :ref:`sd_journal_open(3)`, :ref:`sd_journal_next(3)`, :ref:`sd_journal_get_realtime_usec(3)`, :ref:`sd_journal_query_unique(3)`

File diff suppressed because it is too large.


@ -0,0 +1,576 @@
.. SPDX-License-Identifier: LGPL-2.1-or-later:
:title: os-release
:manvolnum: 5
.. _os-release(5):
=============
os-release(5)
=============
.. only:: html
os-release — initrd-release — extension-release — Operating system identification
#################################################################################
Synopsis
########
``/etc/os-release``
``/usr/lib/os-release``
``/etc/initrd-release``
``/usr/lib/extension-release.d/extension-release.<IMAGE>``
Description
===========
The ``/etc/os-release`` and
``/usr/lib/os-release`` files contain operating
system identification data.
The format of ``os-release`` is a newline-separated list of
environment-like shell-compatible variable assignments. It is possible to source the configuration from
Bourne shell scripts, however, beyond mere variable assignments, no shell features are supported (this
means variable expansion is explicitly not supported), allowing applications to read the file without
implementing a shell compatible execution engine. Variable assignment values must be enclosed in double
or single quotes if they include spaces, semicolons or other special characters outside of A–Z, a–z,
0–9. (Assignments that do not include these special characters may be enclosed in quotes too, but this is
optional.) Shell special characters ("$", quotes, backslash, backtick) must be escaped with backslashes,
following shell style. All strings should be in UTF-8 encoding, and non-printable characters should not
be used. Concatenation of multiple individually quoted strings is not supported. Lines beginning with "#"
are treated as comments. Blank lines are permitted and ignored.
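For illustration, the following hypothetical fragment (all names and values are
invented) shows these rules in practice: plain values need no quoting, values
containing spaces are quoted, and embedded shell-special characters are
backslash-escaped:

.. code-block:: sh

   # Comment lines and blank lines are ignored
   ID=examplelinux
   VERSION_ID=1.0
   PRETTY_NAME="Example Linux 1.0 (Rolling)"
   VARIANT="Workstation \"Preview\" Edition"
   HOME_URL="https://example.org/"
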
The file ``/etc/os-release`` takes
precedence over ``/usr/lib/os-release``.
Applications should check for the former, and exclusively use its
data if it exists, and only fall back to
``/usr/lib/os-release`` if it is missing.
Applications should not read data from both files at the same
time. ``/usr/lib/os-release`` is the recommended
place to store OS release information as part of vendor trees.
``/etc/os-release`` should be a relative symlink
to ``/usr/lib/os-release``, to provide
compatibility with applications only looking at
``/etc/``. A relative symlink instead of an
absolute symlink is necessary to avoid breaking the link in a
chroot or initrd environment.
``os-release`` contains data that is
defined by the operating system vendor and should generally not be
changed by the administrator.
As this file only encodes names and identifiers it should
not be localized.
The ``/etc/os-release`` and
``/usr/lib/os-release`` files might be symlinks
to other files, but it is important that the file is available
from earliest boot on, and hence must be located on the root file
system.
``os-release`` must not contain repeating keys. Nevertheless, readers should pick
the entries later in the file in case of repeats, similarly to how a shell sourcing the file would. A
reader may warn about repeating entries.
For a longer rationale for ``os-release``
please refer to the `Announcement of ``/etc/os-release`` <https://0pointer.de/blog/projects/os-release>`_.
``/etc/initrd-release``
-----------------------
In the `initrd <https://docs.kernel.org/admin-guide/initrd.html>`_,
``/etc/initrd-release`` plays the same role as ``os-release`` in the
main system. Additionally, the presence of that file means that the system is in the initrd phase.
``/etc/os-release`` should be symlinked to ``/etc/initrd-release``
(or vice versa), so programs that only look for ``/etc/os-release`` (as described
above) work correctly.
The rest of this document that talks about ``os-release`` should be understood
to apply to ``initrd-release`` too.
``/usr/lib/extension-release.d/extension-release.<IMAGE>``
----------------------------------------------------------
``/usr/lib/extension-release.d/extension-release.<IMAGE>``
plays the same role for extension images as ``os-release`` for the main system, and
follows the syntax and rules as described in the `Portable Services <https://systemd.io/PORTABLE_SERVICES>`_ page. The purpose of this
file is to identify the extension and to allow the operating system to verify that the extension image
matches the base OS. This is typically implemented by checking that the ``ID=`` options
match, and either ``SYSEXT_LEVEL=`` exists and matches too, or if it is not present,
``VERSION_ID=`` exists and matches. This ensures ABI/API compatibility between the
layers and prevents merging of an incompatible image in an overlay.
In order to identify the extension image itself, the same fields defined below can be added to the
``extension-release`` file with a ``SYSEXT_`` prefix (to disambiguate
from fields used to match on the base image). E.g.: ``SYSEXT_ID=myext``,
``SYSEXT_VERSION_ID=1.2.3``.
In the ``extension-release.<IMAGE>`` filename, the
<IMAGE> part must exactly match the file name of the containing image with the
suffix removed. In case it is not possible to guarantee that an image file name is stable and doesn't
change between the build and the deployment phases, it is possible to relax this check: if exactly one
file whose name matches "``extension-release.*``" is present in this
directory, and the file is tagged with a ``user.extension-release.strict``
:man-pages:`xattr(7)` set to the
string "0", it will be used instead.
The rest of this document that talks about ``os-release`` should be understood
to apply to ``extension-release`` too.
Options
=======
The following OS identification parameters may be set using
``os-release``:
General information identifying the operating system
----------------------------------------------------
.. option:: NAME=
A string identifying the operating system, without a version component, and
suitable for presentation to the user. If not set, a default of "NAME=Linux" may
be used.
Examples: "NAME=Fedora", "NAME="Debian GNU/Linux"".
.. option:: ID=
A lower-case string (no spaces or other characters outside of 0–9, a–z, ".", "_"
and "-") identifying the operating system, excluding any version information and suitable for
processing by scripts or usage in generated filenames. If not set, a default of
"ID=linux" may be used. Note that even though this string may not include
characters that require shell quoting, quoting may nevertheless be used.
Examples: "ID=fedora", "ID=debian".
.. option:: ID_LIKE=
A space-separated list of operating system identifiers in the same syntax as the
:directive:environment-variables:var:`ID=` setting. It should list identifiers of operating systems that are closely
related to the local operating system in regards to packaging and programming interfaces, for
example listing one or more OS identifiers the local OS is a derivative from. An OS should
generally only list other OS identifiers it itself is a derivative of, and not any OSes that are
derived from it, though symmetric relationships are possible. Build scripts and similar should
check this variable if they need to identify the local operating system and the value of
:directive:environment-variables:var:`ID=` is not recognized. Operating systems should be listed in order of how
closely the local operating system relates to the listed ones, starting with the closest. This
field is optional.
Examples: for an operating system with "ID=centos", an assignment of
"ID_LIKE="rhel fedora"" would be appropriate. For an operating system with
"ID=ubuntu", an assignment of "ID_LIKE=debian" is appropriate.
.. option:: PRETTY_NAME=
A pretty operating system name in a format suitable for presentation to the
user. May or may not contain a release code name or OS version of some kind, as suitable. If not
set, a default of "PRETTY_NAME="Linux"" may be used.
Example: "PRETTY_NAME="Fedora 17 (Beefy Miracle)"".
.. option:: CPE_NAME=
A CPE name for the operating system, in URI binding syntax, following the `Common Platform Enumeration Specification <http://scap.nist.gov/specifications/cpe/>`_ as
proposed by the NIST. This field is optional.
Example: "CPE_NAME="cpe:/o:fedoraproject:fedora:17""
.. option:: VARIANT=
A string identifying a specific variant or edition of the operating system suitable
for presentation to the user. This field may be used to inform the user that the configuration of
this system is subject to a specific divergent set of rules or default configuration settings. This
field is optional and may not be implemented on all systems.
Examples: "VARIANT="Server Edition"", "VARIANT="Smart Refrigerator
Edition"".
Note: this field is for display purposes only. The :directive:environment-variables:var:`VARIANT_ID` field should
be used for making programmatic decisions.
.. only:: html
.. versionadded:: 220
.. option:: VARIANT_ID=
A lower-case string (no spaces or other characters outside of 0–9, a–z, ".", "_" and
"-"), identifying a specific variant or edition of the operating system. This may be interpreted by
other packages in order to determine a divergent default configuration. This field is optional and
may not be implemented on all systems.
Examples: "VARIANT_ID=server", "VARIANT_ID=embedded".
.. only:: html
.. versionadded:: 220
Information about the version of the operating system
-----------------------------------------------------
.. option:: VERSION=
A string identifying the operating system version, excluding any OS name
information, possibly including a release code name, and suitable for presentation to the
user. This field is optional.
Examples: "VERSION=17", "VERSION="17 (Beefy Miracle)"".
.. option:: VERSION_ID=
A lower-case string (mostly numeric, no spaces or other characters outside of 0–9,
a–z, ".", "_" and "-") identifying the operating system version, excluding any OS name information
or release code name, and suitable for processing by scripts or usage in generated filenames. This
field is optional.
Examples: "VERSION_ID=17", "VERSION_ID=11.04".
.. option:: VERSION_CODENAME=
A lower-case string (no spaces or other characters outside of 0–9, a–z, ".", "_"
and "-") identifying the operating system release code name, excluding any OS name information or
release version, and suitable for processing by scripts or usage in generated filenames. This field
is optional and may not be implemented on all systems.
Examples: "VERSION_CODENAME=buster",
"VERSION_CODENAME=xenial".
.. only:: html
.. versionadded:: 231
.. option:: BUILD_ID=
A string uniquely identifying the system image originally used as the installation
base. In most cases, :directive:environment-variables:var:`VERSION_ID` or
:directive:environment-variables:var:`IMAGE_ID`+:directive:environment-variables:var:`IMAGE_VERSION` are updated when the entire system
image is replaced during an update. :directive:environment-variables:var:`BUILD_ID` may be used in distributions where
the original installation image version is important: :directive:environment-variables:var:`VERSION_ID` would change
during incremental system updates, but :directive:environment-variables:var:`BUILD_ID` would not. This field is
optional.
Examples: "BUILD_ID="2013-03-20.3"", "BUILD_ID=201303203".
.. only:: html
.. versionadded:: 200
.. option:: IMAGE_ID=
A lower-case string (no spaces or other characters outside of 0–9, a–z, ".", "_"
and "-"), identifying a specific image of the operating system. This is supposed to be used for
environments where OS images are prepared, built, shipped and updated as comprehensive, consistent
OS images. This field is optional and may not be implemented on all systems, in particular not on
those that are not managed via images but put together and updated from individual packages and on
the local system.
Examples: "IMAGE_ID=vendorx-cashier-system",
"IMAGE_ID=netbook-image".
.. only:: html
.. versionadded:: 249
.. option:: IMAGE_VERSION=
A lower-case string (mostly numeric, no spaces or other characters outside of 0–9,
a–z, ".", "_" and "-") identifying the OS image version. This is supposed to be used together with
:directive:environment-variables:var:`IMAGE_ID` described above, to discern different versions of the same image.
Examples: "IMAGE_VERSION=33", "IMAGE_VERSION=47.1rc1".
.. only:: html
.. versionadded:: 249
To summarize: if the image updates are built and shipped as comprehensive units,
``IMAGE_ID``+``IMAGE_VERSION`` is the best fit. Otherwise, if updates
eventually completely replace previously installed contents, as in a typical binary distribution,
``VERSION_ID`` should be used to identify major releases of the operating system.
``BUILD_ID`` may be used instead or in addition to ``VERSION_ID`` when
the original system image version is important.
Presentation information and links
----------------------------------
.. option:: HOME_URL=, DOCUMENTATION_URL=, SUPPORT_URL=, BUG_REPORT_URL=, PRIVACY_POLICY_URL=
Links to resources on the Internet related to the operating system.
:directive:environment-variables:var:`HOME_URL=` should refer to the homepage of the operating system, or alternatively
some homepage of the specific version of the operating system.
:directive:environment-variables:var:`DOCUMENTATION_URL=` should refer to the main documentation page for this
operating system. :directive:environment-variables:var:`SUPPORT_URL=` should refer to the main support page for the
operating system, if there is any. This is primarily intended for operating systems which vendors
provide support for. :directive:environment-variables:var:`BUG_REPORT_URL=` should refer to the main bug reporting page
for the operating system, if there is any. This is primarily intended for operating systems that
rely on community QA. :directive:environment-variables:var:`PRIVACY_POLICY_URL=` should refer to the main privacy
policy page for the operating system, if there is any. These settings are optional, and providing
only some of these settings is common. These URLs are intended to be exposed in "About this system"
UIs behind links with captions such as "About this Operating System", "Obtain Support", "Report a
Bug", or "Privacy Policy". The values should be in `RFC3986 format <https://tools.ietf.org/html/rfc3986>`_, and should be
"http:" or "https:" URLs, and possibly "mailto:"
or "tel:". Only one URL shall be listed in each setting. If multiple resources
need to be referenced, it is recommended to provide an online landing page linking all available
resources.
Examples: "HOME_URL="https://fedoraproject.org/"",
"BUG_REPORT_URL="https://bugzilla.redhat.com/"".
.. option:: SUPPORT_END=
The date at which support for this version of the OS ends. (What exactly "lack of
support" means varies between vendors, but generally users should assume that updates, including
security fixes, will not be provided.) The value is a date in the ISO 8601 format
"YYYY-MM-DD", and specifies the first day on which support *is
not* provided.
For example, "SUPPORT_END=2001-01-01" means that the system was supported
until the end of the last day of the previous millennium.
.. only:: html
.. versionadded:: 252
.. option:: LOGO=
A string, specifying the name of an icon as defined by `freedesktop.org Icon Theme
Specification <https://standards.freedesktop.org/icon-theme-spec/latest>`_. This can be used by graphical applications to display an operating system's
or distributor's logo. This field is optional and may not necessarily be implemented on all
systems.
Examples: "LOGO=fedora-logo", "LOGO=distributor-logo-opensuse"
.. only:: html
.. versionadded:: 240
.. option:: ANSI_COLOR=
A suggested presentation color when showing the OS name on the console. This should
be specified as a string suitable for inclusion in the ESC [ m ANSI/ECMA-48 escape code for setting
graphical rendition. This field is optional.
Examples: "ANSI_COLOR="0;31"" for red, "ANSI_COLOR="1;34""
for light blue, or "ANSI_COLOR="0;38;2;60;110;180"" for Fedora blue.
.. option:: VENDOR_NAME=
The name of the OS vendor. This is the name of the organization or company which
produces the OS. This field is optional.
This name is intended to be exposed in "About this system" UIs or software update UIs when
needed to distinguish the OS vendor from the OS itself. It is intended to be human readable.
Examples: "VENDOR_NAME="Fedora Project"" for Fedora Linux,
"VENDOR_NAME="Canonical"" for Ubuntu.
.. only:: html
.. versionadded:: 254
.. option:: VENDOR_URL=
The homepage of the OS vendor. This field is optional. The
:directive:environment-variables:var:`VENDOR_NAME=` field should be set if this one is, although clients must be
robust against either field not being set.
The value should be in `RFC3986 format <https://tools.ietf.org/html/rfc3986>`_, and should be
"http:" or "https:" URLs. Only one URL shall be listed in the
setting.
Examples: "VENDOR_URL="https://fedoraproject.org/"",
"VENDOR_URL="https://canonical.com/"".
.. only:: html
.. versionadded:: 254
Distribution-level defaults and metadata
----------------------------------------
.. option:: DEFAULT_HOSTNAME=
A string specifying the hostname if
:ref:`hostname(5)` is not
present and no other configuration source specifies the hostname. Must be either a single DNS label
(a string composed of 7-bit ASCII lower-case characters and no spaces or dots, limited to the
format allowed for DNS domain name labels), or a sequence of such labels separated by single dots
that forms a valid DNS FQDN. The hostname must be at most 64 characters, which is a Linux
limitation (DNS allows longer names).
See :ref:`org.freedesktop.hostname1(5)`
for a description of how
:ref:`systemd-hostnamed.service(8)`
determines the fallback hostname.
.. only:: html
.. versionadded:: 248
.. option:: ARCHITECTURE=
A string that specifies which CPU architecture the userspace binaries require.
The architecture identifiers are the same as for :directive:environment-variables:var:`ConditionArchitecture=`
described in :ref:`systemd.unit(5)`.
The field is optional and should only be used when just a single architecture is supported.
It may provide redundant information when used in a GPT partition with a GUID type that already
encodes the architecture. If this is not the case, the architecture should be specified in
e.g., an extension image, to prevent an incompatible host from loading it.
.. only:: html
.. versionadded:: 252
.. option:: SYSEXT_LEVEL=
A lower-case string (mostly numeric, no spaces or other characters outside of 0–9,
a–z, ".", "_" and "-") identifying the operating system extensions support level, to indicate which
extension images are supported. See ``/usr/lib/extension-release.d/extension-release.<IMAGE>``,
`initrd <https://docs.kernel.org/admin-guide/initrd.html>`_ and
:ref:`systemd-sysext(8)`
for more information.
Examples: "SYSEXT_LEVEL=2", "SYSEXT_LEVEL=15.14".
.. only:: html
.. versionadded:: 248
.. option:: CONFEXT_LEVEL=
Semantically the same as :directive:environment-variables:var:`SYSEXT_LEVEL=` but for confext images.
See ``/etc/extension-release.d/extension-release.<IMAGE>``
for more information.
Examples: "CONFEXT_LEVEL=2", "CONFEXT_LEVEL=15.14".
.. only:: html
.. versionadded:: 254
.. option:: SYSEXT_SCOPE=
Takes a space-separated list of one or more of the strings
"system", "initrd" and "portable". This field is
only supported in ``extension-release.d/`` files and indicates what environments
the system extension is applicable to: i.e. to regular systems, to initrds, or to portable service
images. If unspecified, "SYSEXT_SCOPE=system portable" is implied, i.e. any system
extension without this field is applicable to regular systems and to portable service environments,
but not to initrd environments.
.. only:: html
.. versionadded:: 250
.. option:: CONFEXT_SCOPE=
Semantically the same as :directive:environment-variables:var:`SYSEXT_SCOPE=` but for confext images.
.. only:: html
.. versionadded:: 254
.. option:: PORTABLE_PREFIXES=
Takes a space-separated list of one or more valid prefix match strings for the
`Portable Services <https://systemd.io/PORTABLE_SERVICES>`_ logic.
This field serves two purposes: it is informational, identifying portable service images as such
(and thus allowing them to be distinguished from other OS images, such as bootable system images).
It is also used when a portable service image is attached: the specified or implied portable
service prefix is checked against the list specified here, to enforce restrictions on how images may
be attached to a system.
.. only:: html
.. versionadded:: 250
Notes
-----
If you are using this file to determine the OS or a specific version of it, use the
``ID`` and ``VERSION_ID`` fields, possibly with
``ID_LIKE`` as fallback for ``ID``. When looking for an OS identification
string for presentation to the user, use the ``PRETTY_NAME`` field.
Note that operating system vendors may choose not to provide version information, for example to
accommodate for rolling releases. In this case, ``VERSION`` and
``VERSION_ID`` may be unset. Applications should not rely on these fields to be
set.
Operating system vendors may extend the file format and introduce new fields. It is highly
recommended to prefix new fields with an OS specific name in order to avoid name clashes. Applications
reading this file must ignore unknown fields.
Example: "DEBIAN_BTS="debbugs://bugs.debian.org/"".
Container and sandbox runtime managers may make the host's identification data available to
applications by providing the host's ``/etc/os-release`` (if available, otherwise
``/usr/lib/os-release`` as a fallback) as
``/run/host/os-release``.
Examples
========
``os-release`` file for Fedora Workstation
------------------------------------------
.. code-block:: sh

   NAME=Fedora
   VERSION="32 (Workstation Edition)"
   ID=fedora
   VERSION_ID=32
   PRETTY_NAME="Fedora 32 (Workstation Edition)"
   ANSI_COLOR="0;38;2;60;110;180"
   LOGO=fedora-logo-icon
   CPE_NAME="cpe:/o:fedoraproject:fedora:32"
   HOME_URL="https://fedoraproject.org/"
   DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f32/system-administrators-guide/"
   SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
   BUG_REPORT_URL="https://bugzilla.redhat.com/"
   REDHAT_BUGZILLA_PRODUCT="Fedora"
   REDHAT_BUGZILLA_PRODUCT_VERSION=32
   REDHAT_SUPPORT_PRODUCT="Fedora"
   REDHAT_SUPPORT_PRODUCT_VERSION=32
   PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
   VARIANT="Workstation Edition"
   VARIANT_ID=workstation

``extension-release`` file for an extension for Fedora Workstation 32
---------------------------------------------------------------------
.. code-block:: sh

   ID=fedora
   VERSION_ID=32

Reading ``os-release`` in :man-pages:`sh(1)`
--------------------------------------------
.. literalinclude:: /code-examples/sh/check-os-release.sh
:language: shell
Reading ``os-release`` in :die-net:`python(1)` (versions >= 3.10)
-----------------------------------------------------------------
.. literalinclude:: /code-examples/py/check-os-release-simple.py
:language: python
See docs for `platform.freedesktop_os_release <https://docs.python.org/3/library/platform.html#platform.freedesktop_os_release>`_ for more details.
Reading ``os-release`` in :die-net:`python(1)` (any version)
------------------------------------------------------------
.. literalinclude:: /code-examples/py/check-os-release.py
:language: python
Note that the above version that uses the built-in implementation is preferred
in most cases, and the open-coded version here is provided for reference.
See Also
========
:ref:`systemd(1)`, :die-net:`lsb_release(1)`, :ref:`hostname(5)`, :ref:`machine-id(5)`, :ref:`machine-info(5)`


@ -0,0 +1,862 @@
.. SPDX-License-Identifier: LGPL-2.1-or-later:
:title: repart.d
:manvolnum: 5
.. _repart.d(5):
===========
repart.d(5)
===========
.. only:: html
repart.d — Partition Definition Files for Automatic Boot-Time Repartitioning
############################################################################
Synopsis
########
``/etc/repart.d/*.conf``
``/run/repart.d/*.conf``
``/usr/local/lib/repart.d/*.conf``
``/usr/lib/repart.d/*.conf``
Description
===========
``repart.d/*.conf`` files describe basic properties of partitions of block
devices of the local system. They may be used to declare types, names and sizes of partitions that shall
exist. The
:ref:`systemd-repart(8)`
service reads these files and attempts to add new partitions currently missing and enlarge existing
partitions according to these definitions. Operation is generally incremental, i.e. when applied, what
exists already is left intact, and partitions are never shrunk, moved or deleted.
These definition files are useful for implementing operating system images that are prepared and
delivered with minimally sized images (for example lacking any state or swap partitions), and which on
first boot automatically take possession of any remaining disk space following a few basic rules.
Currently, support for partition definition files is only implemented for GPT partition
tables.
Partition files are generally matched against any partitions already existing on disk in a simple
algorithm: the partition files are sorted by their filename (ignoring the directory prefix), and then
compared in order against existing partitions matching the same partition type UUID. Specifically, the
first existing partition with a specific partition type UUID is assigned the first definition file with
the same partition type UUID, and the second existing partition with a specific type UUID the second
partition file with the same type UUID, and so on. Any left-over partition files that have no matching
existing partition are assumed to define new partitions that shall be created. Such partitions are
appended to the end of the partition table, in the order defined by their names utilizing the first
partition slot greater than the highest slot number currently in use. Any existing partitions that have
no matching partition file are left as they are.
Note that these definitions may only be used to create and initialize new partitions or to grow
existing ones. In the latter case it will not grow the contained file systems however; separate
mechanisms, such as
:ref:`systemd-growfs(8)` may be
used to grow the file systems inside of these partitions. Partitions may also be marked for automatic
growing via the ``GrowFileSystem=`` setting, in which case the file system is grown on
first mount by tools that respect this flag. See below for details.
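As a quick orientation, a hypothetical definition file (file name and values
invented for illustration) that lets the root partition take up all remaining
disk space could look like this:

.. code-block::

   # /usr/lib/repart.d/50-root.conf
   [Partition]
   Type=root
   Weight=1000
   GrowFileSystem=on
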
[Partition] Section Options
===========================
``Type=``
---------
The GPT partition type UUID to match. This may be a GPT partition type UUID such as
``4f68bce3-e8cd-4db1-96e7-fbcaf984b709``, or an identifier.
Architecture specific partition types can use one of these architecture identifiers:
``alpha``, ``arc``, ``arm`` (32-bit),
``arm64`` (64-bit, aka aarch64), ``ia64``,
``loongarch64``, ``mips-le``, ``mips64-le``,
``parisc``, ``ppc``, ``ppc64``,
``ppc64-le``, ``riscv32``, ``riscv64``,
``s390``, ``s390x``, ``tilegx``,
``x86`` (32-bit, aka i386) and ``x86-64`` (64-bit, aka amd64).
The supported identifiers are:
.. list-table:: GPT partition type identifiers
:header-rows: 1
* - Identifier
- Explanation
* - ``esp``
- EFI System Partition
* - ``xbootldr``
- Extended Boot Loader Partition
* - ``swap``
- Swap partition
* - ``home``
- Home (``/home/``) partition
* - ``srv``
- Server data (``/srv/``) partition
* - ``var``
- Variable data (``/var/``) partition
* - ``tmp``
- Temporary data (``/var/tmp/``) partition
* - ``linux-generic``
- Generic Linux file system partition
* - ``root``
- Root file system partition type appropriate for the local architecture (an alias for an architecture root file system partition type listed below, e.g. ``root-x86-64``)
* - ``root-verity``
- Verity data for the root file system partition for the local architecture
* - ``root-verity-sig``
- Verity signature data for the root file system partition for the local architecture
* - ``root-secondary``
- Root file system partition of the secondary architecture of the local architecture (usually the matching 32-bit architecture for the local 64-bit architecture)
* - ``root-secondary-verity``
- Verity data for the root file system partition of the secondary architecture
* - ``root-secondary-verity-sig``
- Verity signature data for the root file system partition of the secondary architecture
* - ``root-{arch}``
- Root file system partition of the given architecture (such as ``root-x86-64`` or ``root-riscv64``)
* - ``root-{arch}-verity``
- Verity data for the root file system partition of the given architecture
* - ``root-{arch}-verity-sig``
- Verity signature data for the root file system partition of the given architecture
* - ``usr``
- ``/usr/`` file system partition type appropriate for the local architecture (an alias for an architecture ``/usr/`` file system partition type listed below, e.g. ``usr-x86-64``)
* - ``usr-verity``
- Verity data for the ``/usr/`` file system partition for the local architecture
* - ``usr-verity-sig``
- Verity signature data for the ``/usr/`` file system partition for the local architecture
* - ``usr-secondary``
- ``/usr/`` file system partition of the secondary architecture of the local architecture (usually the matching 32-bit architecture for the local 64-bit architecture)
* - ``usr-secondary-verity``
- Verity data for the ``/usr/`` file system partition of the secondary architecture
* - ``usr-secondary-verity-sig``
- Verity signature data for the ``/usr/`` file system partition of the secondary architecture
* - ``usr-{arch}``
- ``/usr/`` file system partition of the given architecture
* - ``usr-{arch}-verity``
- Verity data for the ``/usr/`` file system partition of the given architecture
* - ``usr-{arch}-verity-sig``
- Verity signature data for the ``/usr/`` file system partition of the given architecture
This setting defaults to ``linux-generic``.
Most of the partition type UUIDs listed above are defined in the `Discoverable Partitions
Specification <https://uapi-group.org/specifications/specs/discoverable_partitions_specification>`_.
.. only:: html
.. versionadded:: 245
``Label=``
----------
The textual label to assign to the partition if none is assigned yet. Note that this
setting is not used for matching. It is also not used when a label is already set for an existing
partition. It is thus only used when a partition is newly created or when an existing one had no
label set (that is: an empty label). If not specified a label derived from the partition type is
automatically used. Simple specifier expansion is supported, see below.
.. only:: html
.. versionadded:: 245
``UUID=``
---------
The UUID to assign to the partition if none is assigned yet. Note that this
setting is not used for matching. It is also not used when a UUID is already set for an existing
partition. It is thus only used when a partition is newly created or when an existing one had an
all-zero UUID set. If set to "null", the UUID is set to all zeroes. If not specified
a UUID derived from the partition type is automatically used.
.. only:: html
.. versionadded:: 246
``Priority=``
-------------
A numeric priority to assign to this partition, in the range -2147483648…2147483647,
with smaller values indicating higher priority, and higher values indicating lower priority. This
priority is used in case the configured size constraints on the defined partitions do not permit
fitting all partitions onto the available disk space. If the partitions do not fit, the highest
numeric partition priority of all defined partitions is determined, and all defined partitions with
this priority are removed from the list of new partitions to create (which may be multiple, if the
same priority is used for multiple partitions). The fitting algorithm is then tried again. If the
partitions still do not fit, the now highest numeric partition priority is determined, and the
matching partitions removed too, and so on. Partitions of a priority of 0 or lower are never
removed. If all partitions with a priority above 0 are removed and the partitions still do not fit on
the device the operation fails. Note that this priority has no effect on ordering partitions, for
that use the alphabetical order of the filenames of the partition definition files. Defaults to
0.
.. only:: html
.. versionadded:: 245
``Weight=``
-----------
A numeric weight to assign to this partition in the range 0…1000000. Available disk
space is assigned to the defined partitions according to their relative weights (subject to the size
constraints configured with ``SizeMinBytes=``, ``SizeMaxBytes=``), so
that a partition with weight 2000 gets double the space as one with weight 1000, and a partition with
weight 333 a third of that. Defaults to 1000.
The ``Weight=`` setting is used to distribute available disk space in an
"elastic" fashion, based on the disk size and existing partitions. If a partition shall have a fixed
size use both ``SizeMinBytes=`` and ``SizeMaxBytes=`` with the same
value in order to fixate the size to one value, in which case the weight has no
effect.
.. only:: html
.. versionadded:: 245
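For example, a hypothetical fragment pinning a swap partition to exactly 4G
would set both bounds to the same value, making the configured weight
irrelevant:

.. code-block::

   [Partition]
   Type=swap
   SizeMinBytes=4G
   SizeMaxBytes=4G
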
``PaddingWeight=``
------------------
Similar to ``Weight=``, but sets a weight for the free space after the
partition (the "padding"). When distributing available space the weights of all partitions and all
defined padding are summed, and then each partition and padding gets the fraction defined by its
weight. Defaults to 0, i.e. by default no padding is applied.
Padding is useful if empty space shall be left for later additions or a safety margin at the
end of the device or between partitions.
.. only:: html
.. versionadded:: 245
``SizeMinBytes=, SizeMaxBytes=``
--------------------------------
Specifies minimum and maximum size constraints in bytes. Takes the usual K, M, G, T,
… suffixes (to the base of 1024). If ``SizeMinBytes=`` is specified the partition is
created at or grown to at least the specified size. If ``SizeMaxBytes=`` is specified
the partition is created at or grown to at most the specified size. The precise size is determined
through the weight value configured with ``Weight=``, see above. When
``SizeMinBytes=`` is set equal to ``SizeMaxBytes=`` the configured
weight has no effect as the partition is explicitly sized to the specified fixed value. Note that
partitions are never created smaller than 4096 bytes, and since partitions are never shrunk the
previous size of the partition (in case the partition already exists) is also enforced as lower bound
for the new size. The values should be specified as multiples of 4096 bytes, and are rounded upwards
(in case of ``SizeMinBytes=``) or downwards (in case of
``SizeMaxBytes=``) otherwise. If the backing device does not provide enough space to
fulfill the constraints placing the partition will fail. For partitions that shall be created,
depending on the setting of ``Priority=`` (see above) the partition might be dropped
and the placing algorithm restarted. By default a minimum size constraint of 10M and no maximum size
constraint is set.
.. only:: html
.. versionadded:: 245
``PaddingMinBytes=, PaddingMaxBytes=``
--------------------------------------
Specifies minimum and maximum size constraints in bytes for the free space after the
partition (the "padding"). Semantics are similar to ``SizeMinBytes=`` and
``SizeMaxBytes=``, except that unlike partition sizes free space can be shrunk and can
be as small as zero. By default no size constraints on padding are set, so that only
``PaddingWeight=`` determines the size of the padding applied.
.. only:: html
.. versionadded:: 245
``CopyBlocks=``
---------------
Takes a path to a regular file, block device node, char device node or directory, or
the special value "auto". If specified and the partition is newly created, the data
from the specified path is written to the newly created partition, on the block level. If a directory
is specified, the backing block device of the file system the directory is on is determined, and the
data read directly from that. This option is useful to efficiently replicate existing file systems
onto new partitions on the block level — for example to build a simple OS installer or an OS image
builder. Specify ``/dev/urandom`` as value to initialize a partition with random
data.
If the special value "auto" is specified, the source to copy from is
automatically picked up from the running system (or the image specified with
``--image=`` — if used). A partition that matches both the configured partition type (as
declared with ``Type=`` described above), and the currently mounted directory
appropriate for that partition type is determined. For example, if the partition type is set to
"root" the partition backing the root directory (``/``) is used as
source to copy from — if its partition type is set to "root" as well. If the
declared type is "usr" the partition backing ``/usr/`` is used as
source to copy blocks from — if its partition type is set to "usr" too. The logic is
capable of automatically tracking down the backing partitions for encrypted and Verity-enabled
volumes. "CopyBlocks=auto" is useful for implementing "self-replicating" systems,
i.e. systems that are their own installer.
The file specified here must have a size that is a multiple of the basic block size 512 and not
be empty. If this option is used, the size allocation algorithm is slightly altered: the partition is
created at least as big as required to fit the data in, i.e. the data size is an additional minimum
size value taken into consideration for the allocation algorithm, similar to and in addition to the
``SizeMinBytes=`` value configured above.
This option has no effect if the partition it is declared for already exists, i.e. existing
data is never overwritten. Note that the data is copied in before the partition table is updated,
i.e. before the partition actually is persistently created. This provides robustness: it is
guaranteed that the partition either doesn't exist or exists fully populated; it is not possible that
the partition exists but is not or only partially populated.
This option cannot be combined with ``Format=`` or
``CopyFiles=``.
.. only:: html
.. versionadded:: 246
``Format=``
-----------
Takes a file system name, such as "ext4", "btrfs",
"xfs", "vfat", "erofs",
"squashfs" or the special value "swap". If specified and the partition
is newly created it is formatted with the specified file system (or as swap device). The file system
UUID and label are automatically derived from the partition UUID and label. If this option is used,
the size allocation algorithm is slightly altered: the partition is created at least as big as
required for the minimal file system of the specified type (or 4KiB if the minimal size is not
known).
This option has no effect if the partition already exists.
Similarly to the behaviour of ``CopyBlocks=``, the file system is formatted
before the partition is created, ensuring that the partition only ever exists with a fully
initialized file system.
This option cannot be combined with ``CopyBlocks=``.
.. only:: html
.. versionadded:: 247
``CopyFiles=``
--------------
Takes a pair of colon separated absolute file system paths. The first path refers to
a source file or directory on the host, the second path refers to a target in the file system of the
newly created partition and formatted file system. This setting may be used to copy files or
directories from the host into the file system that is created due to the ``Format=``
option. If ``CopyFiles=`` is used without ``Format=`` specified
explicitly, "Format=" with a suitable default is implied (currently
"vfat" for "ESP" and "XBOOTLDR" partitions, and
"ext4" otherwise, but this may change in the future). This option may be used
multiple times to copy multiple files or directories from host into the newly formatted file system.
The colon and second path may be omitted in which case the source path is also used as the target
path (relative to the root of the newly created file system). If the source path refers to a
directory it is copied recursively.
This option has no effect if the partition already exists: it cannot be used to copy additional
files into an existing partition, it may only be used to populate a file system created anew.
The copy operation is executed before the file system is registered in the partition table,
thus ensuring that a file system populated this way only ever exists fully initialized.
Note that ``CopyFiles=`` will skip copying files that aren't supported by the
target filesystem (e.g. symlinks, fifos, sockets and devices on vfat). When an unsupported file type
is encountered, ``systemd-repart`` will skip copying this file and write a log message
about it.
Note that ``systemd-repart`` does not change the UIDs/GIDs of any copied files
and directories. When running ``systemd-repart`` as an unprivileged user to build an
image of files and directories owned by the same user, you can run ``systemd-repart``
in a user namespace with the current user mapped to the root user to make sure the files and
directories in the image are owned by the root user.
Note that when populating XFS filesystems with ``systemd-repart`` and loop
devices are not available, populating XFS filesystems with files containing spaces, tabs or newlines
might fail on old versions of
:man-pages:`mkfs.xfs(8)`
due to limitations of its protofile format.
Note that when populating XFS filesystems with ``systemd-repart`` and loop
devices are not available, extended attributes will not be copied into generated XFS filesystems
due to limitations of :man-pages:`mkfs.xfs(8)`'s
protofile format.
This option cannot be combined with ``CopyBlocks=``.
When
:ref:`systemd-repart(8)` is
invoked with the ``--copy-source=`` command line switch the file paths are taken
relative to the specified directory. If ``--copy-source=`` is not used, but the
``--image=`` or ``--root=`` switches are used, the source paths are taken
relative to the specified root directory or disk image root.
.. only:: html
.. versionadded:: 247
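A hypothetical fragment (paths invented for illustration) showing both the
"source:target" form and the shorthand where the target path is omitted:

.. code-block::

   [Partition]
   Type=root
   Format=ext4
   CopyFiles=/usr/share/factory/etc:/etc
   CopyFiles=/srv
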
``ExcludeFiles=, ExcludeFilesTarget=``
--------------------------------------
Takes an absolute file system path referring to a source file or directory on the
host. This setting may be used to exclude files or directories from the host from being copied into
the file system when ``CopyFiles=`` is used. This option may be used multiple times to
exclude multiple files or directories from host from being copied into the newly formatted file
system.
If the path is a directory and ends with "/", only the directory's
contents are excluded but not the directory itself. If the path is a directory and does not end with
"/", both the directory and its contents are excluded.
``ExcludeFilesTarget=`` is like ``ExcludeFiles=`` except that
instead of excluding the path on the host from being copied into the partition, we exclude any files
and directories from being copied into the given path in the partition.
When
:ref:`systemd-repart(8)`
is invoked with the ``--image=`` or ``--root=`` command line switches the
paths specified are taken relative to the specified root directory or disk image root.
.. only:: html
.. versionadded:: 254
``MakeDirectories=``
--------------------
Takes one or more absolute paths, separated by whitespace, each declaring a directory
to create within the new file system. Behaviour is similar to ``CopyFiles=``, but
instead of copying in a set of files this just creates the specified directories with the default
mode of 0755 owned by the root user and group, plus all their parent directories (with the same
ownership and access mode). To configure directories with different ownership or access mode, use
``CopyFiles=`` and specify a source tree to copy containing appropriately
owned/configured directories. This option may be used more than once to create multiple
directories. When ``CopyFiles=`` and ``MakeDirectories=`` are used
together the former is applied first. If a directory listed already exists no operation is executed
(in particular, the ownership/access mode of the directories is left as is).
The primary use case for this option is to create a minimal set of directories that may be
mounted over by other partitions contained in the same disk image. For example, a disk image where
the root file system is formatted at first boot might want to automatically pre-create
``/usr/`` in it this way, so that the "usr" partition may
over-mount it.
Consider using
:ref:`systemd-tmpfiles(8)`
with its ``--image=`` option to pre-create other, more complex directory hierarchies (as
well as other inodes) with fine-grained control of ownership, access modes and other file
attributes.
.. only:: html
.. versionadded:: 249
``Subvolumes=``
---------------
Takes one or more absolute paths, separated by whitespace, each declaring a directory
that should be a subvolume within the new file system. This option may be used more than once to
specify multiple directories. Note that this setting does not create the directories themselves, that
can be configured with ``MakeDirectories=`` and ``CopyFiles=``.
Note that this option only takes effect if the target filesystem supports subvolumes, such as
"btrfs".
Note that due to limitations of "mkfs.btrfs", this option is only supported
when running with ``--offline=no``.
.. only:: html
.. versionadded:: 255
``DefaultSubvolume=``
---------------------
Takes an absolute path specifying the default subvolume within the new filesystem.
Note that this setting does not create the subvolume itself, that can be configured with
``Subvolumes=``.
Note that this option only takes effect if the target filesystem supports subvolumes, such as
"btrfs".
Note that due to limitations of "mkfs.btrfs", this option is only supported
when running with ``--offline=no``.
.. only:: html
.. versionadded:: 256
``Encrypt=``
------------
Takes one of "off", "key-file",
"tpm2" and "key-file+tpm2" (alternatively, also accepts a boolean
value, which is mapped to "off" when false, and "key-file" when
true). Defaults to "off". If not "off" the partition will be
formatted with a LUKS2 superblock, before the blocks configured with ``CopyBlocks=``
are copied in or the file system configured with ``Format=`` is created.
The LUKS2 UUID is automatically derived from the partition UUID in a stable fashion. If
"key-file" or "key-file+tpm2" is used, a key is added to the LUKS2
superblock, configurable with the ``--key-file=`` option to
``systemd-repart``. If "tpm2" or "key-file+tpm2" is
used, a key is added to the LUKS2 superblock that is enrolled to the local TPM2 chip, as configured
with the ``--tpm2-device=`` and ``--tpm2-pcrs=`` options to
``systemd-repart``.
When used this slightly alters the size allocation logic as the implicit, minimal size limits
of ``Format=`` and ``CopyBlocks=`` are increased by the space necessary
for the LUKS2 superblock (see above).
This option has no effect if the partition already exists.
.. only:: html
.. versionadded:: 247
``Verity=``
-----------
Takes one of "off", "data",
"hash" or "signature". Defaults to "off". If set
to "off" or "data", the partition is populated with content as
specified by ``CopyBlocks=`` or ``CopyFiles=``. If set to
"hash", the partition will be populated with verity hashes from the matching verity
data partition. If set to "signature", the partition will be populated with a JSON
object containing a signature of the verity root hash of the matching verity hash partition.
A matching verity partition is a partition with the same verity match key (as configured with
``VerityMatchKey=``).
If not explicitly configured, the data partition's UUID will be set to the first 128
bits of the verity root hash. Similarly, if not configured, the hash partition's UUID will be set to
the final 128 bits of the verity root hash. The verity root hash itself will be included in the
output of ``systemd-repart``.
This option has no effect if the partition already exists.
Usage of this option in combination with ``Encrypt=`` is not supported.
For each unique ``VerityMatchKey=`` value, a single verity data partition
("Verity=data") and a single verity hash partition ("Verity=hash")
must be defined.
.. only:: html
.. versionadded:: 252
``VerityMatchKey=``
-------------------
Takes a short, user-chosen identifier string. This setting is used to find sibling
verity partitions for the current verity partition. See the description for
``Verity=``.
.. only:: html
.. versionadded:: 252
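To make the pairing concrete, a hypothetical pair of definition files (file
names invented for illustration) sharing the same ``VerityMatchKey=`` could
look like this:

.. code-block::

   # 20-root.conf
   [Partition]
   Type=root
   CopyFiles=/
   Verity=data
   VerityMatchKey=root

   # 21-root-verity.conf
   [Partition]
   Type=root-verity
   Verity=hash
   VerityMatchKey=root
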
``VerityDataBlockSizeBytes=``
-----------------------------
Configures the data block size of the generated verity hash partition. Must be between 512 and
4096 bytes and must be a power of 2. Defaults to the sector size if configured explicitly, or the underlying
block device sector size, or 4K if systemd-repart is not operating on a block device.
.. only:: html
.. versionadded:: 255
``VerityHashBlockSizeBytes=``
-----------------------------
Configures the hash block size of the generated verity hash partition. Must be between 512 and
4096 bytes and must be a power of 2. Defaults to the sector size if configured explicitly, or the underlying
block device sector size, or 4K if systemd-repart is not operating on a block device.
.. only:: html
.. versionadded:: 255
``FactoryReset=``
-----------------
Takes a boolean argument. If specified the partition is marked for removal during a
factory reset operation. This functionality is useful to implement schemes where images can be reset
into their original state by removing partitions and creating them anew. Defaults to off.
.. only:: html
.. versionadded:: 245
``Flags=``
----------
Configures the 64-bit GPT partition flags field to set for the partition when creating
it. This option has no effect if the partition already exists. If not specified the flags value is
set to all zeroes, except for the three bits that can also be configured via
``NoAuto=``, ``ReadOnly=`` and ``GrowFileSystem=``; see
below for details on the defaults for these three flags. Specify the flags value in hexadecimal (by
prefixing it with "0x"), binary (prefix "0b") or decimal (no
prefix).
.. only:: html
.. versionadded:: 249
``NoAuto=, ReadOnly=, GrowFileSystem=``
---------------------------------------
Configures the No-Auto, Read-Only and Grow-File-System partition flags (bit 63, 60
and 59) of the partition table entry, as defined by the `Discoverable Partitions Specification <https://uapi-group.org/specifications/specs/discoverable_partitions_specification>`_. Only
available for partition types supported by the specification. This option is a friendly way to set
bits 63, 60 and 59 of the partition flags value without setting any of the other bits, and may be set
via ``Flags=`` too, see above.
If ``Flags=`` is used in conjunction with one or more of
``NoAuto=``/``ReadOnly=``/``GrowFileSystem=`` the latter
control the value of the relevant flags, i.e. the high-level settings
``NoAuto=``/``ReadOnly=``/``GrowFileSystem=`` override
the relevant bits of the low-level setting ``Flags=``.
Note that the three flags affect only automatic partition mounting, as implemented by
:ref:`systemd-gpt-auto-generator(8)`
or the ``--image=`` option of various commands (such as
:ref:`systemd-nspawn(1)`). It
has no effect on explicit mounts, such as those done via :man-pages:`mount(8)` or
:man-pages:`fstab(5)`.
If both bits 60 and 59 are set for a partition (i.e. the partition is marked both read-only and
marked for file system growing) the latter is typically without effect: the read-only flag takes
precedence in most tools reading these flags, and since growing the file system involves writing to
the partition it is consequently ignored.
``NoAuto=`` defaults to off. ``ReadOnly=`` defaults to on for
Verity partition types, and off for all others. ``GrowFileSystem=`` defaults to on for
all partition types that support it, except if the partition is marked read-only (and thus
effectively, defaults to off for Verity partitions).
.. only:: html
.. versionadded:: 249
``SplitName=``
--------------
Configures the suffix to append to split artifacts when the ``--split``
option of
:ref:`systemd-repart(8)` is
used. Simple specifier expansion is supported, see below. Defaults to "%t". To
disable split artifact generation for a partition, set ``SplitName=`` to
"-".
.. only:: html
.. versionadded:: 252
``Minimize=``
-------------
Takes one of "off", "best", and
"guess" (alternatively, also accepts a boolean value, which is mapped to
"off" when false, and "best" when true). Defaults to
"off". If set to "best", the partition will have the minimal size
required to store the sources configured with ``CopyFiles=``. "best"
is currently only supported for read-only filesystems. If set to "guess", the
partition is created at least as big as required to store the sources configured with
``CopyFiles=``. Note that unless the filesystem is a read-only filesystem,
``systemd-repart`` will have to populate the filesystem twice to guess the minimal
required size, so enabling this option might slow down repart when populating large partitions.
.. only:: html
.. versionadded:: 253
``MountPoint=``
---------------
Specifies where and how the partition should be mounted. Takes at least one and at
most two fields separated with a colon (":"). The first field specifies where the
partition should be mounted. The second field specifies extra mount options to append to the default
mount options. These fields correspond to the second and fourth column of the
:man-pages:`fstab(5)`
format. This setting may be specified multiple times to mount the partition multiple times. This can
be used to add mounts for different btrfs subvolumes located on the same btrfs partition.
Note that this setting is only taken into account when ``--generate-fstab=`` is
specified on the ``systemd-repart`` command line.
.. only:: html
.. versionadded:: 256
``EncryptedVolume=``
--------------------
Specify how the encrypted partition should be set up. Takes at least one and at most
three fields separated with a colon (":"). The first field specifies the encrypted
volume name under ``/dev/mapper/``. If not specified, "luks-UUID"
will be used where "UUID" is the LUKS UUID. The second field specifies the keyfile
to use following the same format as specified in crypttab. The third field specifies a
comma-delimited list of crypttab options. These fields correspond to the first, third and fourth
column of the
:ref:`crypttab(5)` format.
Note that this setting is only taken into account when ``--generate-crypttab=``
is specified on the ``systemd-repart`` command line.
.. only:: html
.. versionadded:: 256
Specifiers
==========
Specifiers may be used in the ``Label=``, ``CopyBlocks=``,
``CopyFiles=``, ``MakeDirectories=``, ``SplitName=``
settings. The following expansions are understood:
.. list-table:: Specifiers available
:header-rows: 1
* - Specifier
- Meaning
- Details
Additionally, for the ``SplitName=`` setting, the following specifiers are also
understood:
.. list-table:: Specifiers available
:header-rows: 1
* - Specifier
- Meaning
- Details
* - "%T"
- Partition Type UUID
- The partition type UUID, as configured with ``Type=``
* - "%t"
- Partition Type Identifier
- The partition type identifier corresponding to the partition type UUID
* - "%U"
- Partition UUID
- The partition UUID, as configured with ``UUID=``
* - "%n"
- Partition Number
- The partition number assigned to the partition
Environment
===========
Extra filesystem formatting options can be provided using filesystem-specific environment variables:
``$SYSTEMD_REPART_MKFS_OPTIONS_BTRFS``, ``$SYSTEMD_REPART_MKFS_OPTIONS_XFS``,
``$SYSTEMD_REPART_MKFS_OPTIONS_VFAT``, ``$SYSTEMD_REPART_MKFS_OPTIONS_EROFS``,
and ``$SYSTEMD_REPART_MKFS_OPTIONS_SQUASHFS``. Each variable accepts valid
``mkfs.<filesystem>`` command-line arguments.
The content of those variables is passed as-is to the command, without any verification.
Examples
========
Grow the root partition to the full disk size at first boot
-----------------------------------------------------------
With the following file the root partition is automatically grown to the full disk if possible
during boot.
.. code-block:: sh
# /usr/lib/repart.d/50-root.conf
[Partition]
Type=root
Create a swap and home partition automatically on boot, if missing
------------------------------------------------------------------
The home partition gets all available disk space while the swap partition gets 1G at most and 64M
at least. We set a priority > 0 on the swap partition to ensure the swap partition is not used if not
enough space is available. For every three bytes assigned to the home partition the swap partition gets
assigned one.
.. code-block:: sh
# /usr/lib/repart.d/60-home.conf
[Partition]
Type=home
.. code-block:: sh
# /usr/lib/repart.d/70-swap.conf
[Partition]
Type=swap
SizeMinBytes=64M
SizeMaxBytes=1G
Priority=1
Weight=333
Create B partitions in an A/B Verity setup, if missing
------------------------------------------------------
Let's say the vendor intends to update OS images in an A/B setup, i.e. with two root partitions
(and two matching Verity partitions) that shall be used alternatingly during upgrades. To minimize
image sizes the original image is shipped only with one root and one Verity partition (the "A" set),
and the second root and Verity partitions (the "B" set) shall be created on first boot on the free
space on the medium.
.. code-block:: sh
# /usr/lib/repart.d/50-root.conf
[Partition]
Type=root
SizeMinBytes=512M
SizeMaxBytes=512M
.. code-block:: sh
# /usr/lib/repart.d/60-root-verity.conf
[Partition]
Type=root-verity
SizeMinBytes=64M
SizeMaxBytes=64M
The definitions above cover the "A" set of root partition (of a fixed 512M size) and Verity
partition for the root partition (of a fixed 64M size). Let's use symlinks to create the "B" set of
partitions, since after all they shall have the same properties and sizes as the "A" set.
.. code-block:: sh
# ln -s 50-root.conf /usr/lib/repart.d/70-root-b.conf
# ln -s 60-root-verity.conf /usr/lib/repart.d/80-root-verity-b.conf
Create a data partition and corresponding verity partitions from an OS tree
--------------------------------------------------------------------------------
Assuming we have an OS tree at ``/var/tmp/os-tree`` that we want
to package in a root partition together with matching verity partitions, we can do so as follows:
.. code-block:: sh
# 50-root.conf
[Partition]
Type=root
CopyFiles=/var/tmp/os-tree
Verity=data
VerityMatchKey=root
Minimize=guess
.. code-block:: sh
# 60-root-verity.conf
[Partition]
Type=root-verity
Verity=hash
VerityMatchKey=root
# Explicitly set the hash and data block size to 4K
VerityDataBlockSizeBytes=4096
VerityHashBlockSizeBytes=4096
Minimize=best
.. code-block:: sh
# 70-root-verity-sig.conf
[Partition]
Type=root-verity-sig
Verity=signature
VerityMatchKey=root
See Also
========
:ref:`systemd(1)`, :ref:`systemd-repart(8)`, :man-pages:`sfdisk(8)`, :ref:`systemd-cryptenroll(1)`

View File

@ -0,0 +1,113 @@
.. SPDX-License-Identifier: LGPL-2.1-or-later
:title: runlevel
:manvolnum: 8
.. _runlevel(8):
===========
runlevel(8)
===========
.. only:: html
runlevel — Print previous and current SysV runlevel
###################################################
Synopsis
########
``runlevel`` [options...]
Overview
========
"Runlevels" are an obsolete way to start and stop groups of
services used in SysV init. systemd provides a compatibility layer
that maps runlevels to targets, and associated binaries like
``runlevel``. Nevertheless, only one runlevel can
be "active" at a given time, while systemd can activate multiple
targets concurrently, so the mapping to runlevels is confusing
and only approximate. Runlevels should not be used in new code,
and are mostly useful as a shorthand way to refer to the matching
systemd targets in kernel boot parameters.
.. list-table:: Mapping between runlevels and systemd targets
:header-rows: 1
* - Runlevel
- Target
* - 0
- ``poweroff.target``
* - 1
- ``rescue.target``
* - 2, 3, 4
- ``multi-user.target``
* - 5
- ``graphical.target``
* - 6
- ``reboot.target``
Description
===========
``runlevel`` prints the previous and current
SysV runlevel if they are known.
The two runlevel characters are separated by a single space
character. If a runlevel cannot be determined, N is printed
instead. If neither can be determined, the word "unknown" is
printed.
Unless overridden in the environment, this will check the
utmp database for recent runlevel changes.
Options
=======
The following option is understood:
``--help``
----------
Print a short help text and exit.
Exit status
===========
If one or both runlevels could be determined, 0 is returned,
a non-zero failure code otherwise.
Environment
===========
``$RUNLEVEL``
-------------
If :directive:environment-variables:var:`$RUNLEVEL` is set,
``runlevel`` will print this value as current
runlevel and ignore utmp.
``$PREVLEVEL``
--------------
If :directive:environment-variables:var:`$PREVLEVEL` is set,
``runlevel`` will print this value as previous
runlevel and ignore utmp.
Files
=====
``/run/utmp``
-------------
The utmp database ``runlevel`` reads the previous and current runlevel
from.
.. only:: html
.. versionadded:: 237
See Also
========
:ref:`systemd(1)`, :ref:`systemd.target(5)`, :ref:`systemctl(1)`

View File

@ -0,0 +1,17 @@
.. SPDX-License-Identifier: LGPL-2.1-or-later
:title: systemd.directives
:manvolnum: 7
.. _systemd.directives(7):
=========================
systemd.directives(7)
=========================
Index of configuration directives
.. list_directive_roles::

File diff suppressed because it is too large

View File

@ -0,0 +1,266 @@
.. SPDX-License-Identifier: LGPL-2.1-or-later
:orphan:
Environment
###########
.. inclusion-marker-do-not-remove log-level
``$SYSTEMD_LOG_LEVEL``
----------------------
.. inclusion-marker-do-not-remove log-level-body
The maximum log level of emitted messages (messages with a higher
log level, i.e. less important ones, will be suppressed). Takes a comma-separated list of values. A
value may be either one of (in order of decreasing importance) ``emerg``,
``alert``, ``crit``, ``err``,
``warning``, ``notice``, ``info``,
``debug``, or an integer in the range 0…7. See
`syslog(3) <https://man7.org/linux/man-pages/man3/syslog.3.html>`_
for more information. Each value may optionally be prefixed with one of ``console``,
``syslog``, ``kmsg`` or ``journal`` followed by a
colon to set the maximum log level for that specific log target (e.g.
``SYSTEMD_LOG_LEVEL=debug,console:info`` specifies to log at debug level except when
logging to the console which should be at info level). Note that the global maximum log level takes
priority over any per target maximum log levels.
.. inclusion-end-marker-do-not-remove log-level-body
.. inclusion-end-marker-do-not-remove log-level
.. inclusion-marker-do-not-remove log-color
``$SYSTEMD_LOG_COLOR``
----------------------
.. inclusion-marker-do-not-remove log-color-body
A boolean. If true, messages written to the tty will be colored
according to priority.
This setting is only useful when messages are written directly to the terminal, because
:ref:`journalctl(1)` and
other tools that display logs will color messages based on the log level on their own.
.. inclusion-end-marker-do-not-remove log-color-body
.. inclusion-end-marker-do-not-remove log-color
.. inclusion-marker-do-not-remove log-time
``$SYSTEMD_LOG_TIME``
---------------------
.. inclusion-marker-do-not-remove log-time-body
A boolean. If true, console log messages will be prefixed with a
timestamp.
This setting is only useful when messages are written directly to the terminal or a file, because
:ref:`journalctl(1)` and
other tools that display logs will attach timestamps based on the entry metadata on their own.
.. inclusion-end-marker-do-not-remove log-time-body
.. inclusion-end-marker-do-not-remove log-time
.. inclusion-marker-do-not-remove log-location
``$SYSTEMD_LOG_LOCATION``
-------------------------
.. inclusion-marker-do-not-remove log-location-body
A boolean. If true, messages will be prefixed with a filename
and line number in the source code where the message originates.
Note that the log location is often attached as metadata to journal entries anyway. Including it
directly in the message text can nevertheless be convenient when debugging programs.
.. inclusion-end-marker-do-not-remove log-location-body
.. inclusion-end-marker-do-not-remove log-location
.. inclusion-marker-do-not-remove log-tid
``$SYSTEMD_LOG_TID``
--------------------
.. inclusion-marker-do-not-remove log-tid-body
A boolean. If true, messages will be prefixed with the current
numerical thread ID (TID).
Note that this information is attached as metadata to journal entries anyway. Including it
directly in the message text can nevertheless be convenient when debugging programs.
.. inclusion-end-marker-do-not-remove log-tid-body
.. inclusion-end-marker-do-not-remove log-tid
.. inclusion-marker-do-not-remove log-target
``$SYSTEMD_LOG_TARGET``
-----------------------
.. inclusion-marker-do-not-remove log-target-body
The destination for log messages. One of
``console`` (log to the attached tty), ``console-prefixed`` (log to
the attached tty but with prefixes encoding the log level and "facility", see `syslog(3) <https://man7.org/linux/man-pages/man3/syslog.3.html>`_),
``kmsg`` (log to the kernel circular log buffer), ``journal`` (log to
the journal), ``journal-or-kmsg`` (log to the journal if available, and to kmsg
otherwise), ``auto`` (determine the appropriate log target automatically, the default),
``null`` (disable log output).
.. COMMENT: syslog, syslog-or-kmsg are deprecated
.. inclusion-end-marker-do-not-remove log-target-body
.. inclusion-end-marker-do-not-remove log-target
.. inclusion-marker-do-not-remove log-ratelimit-kmsg
``$SYSTEMD_LOG_RATELIMIT_KMSG``
-------------------------------
.. inclusion-marker-do-not-remove log-ratelimit-kmsg-body
Whether to ratelimit kmsg or not. Takes a boolean.
Defaults to ``true``. If disabled, systemd will not ratelimit messages written to kmsg.
.. inclusion-end-marker-do-not-remove log-ratelimit-kmsg-body
.. inclusion-end-marker-do-not-remove log-ratelimit-kmsg
.. inclusion-marker-do-not-remove pager
``$SYSTEMD_PAGER``
------------------
.. inclusion-marker-do-not-remove pager-body
Pager to use when ``--no-pager`` is not given; overrides
``$PAGER``. If neither ``$SYSTEMD_PAGER`` nor ``$PAGER`` are set, a
set of well-known pager implementations are tried in turn, including
`less(1) <https://man7.org/linux/man-pages/man1/less.1.html>`_ and
`more(1) <https://man7.org/linux/man-pages/man1/more.1.html>`_, until one is found. If
no pager implementation is discovered no pager is invoked. Setting this environment variable to an empty string
or the value ``cat`` is equivalent to passing ``--no-pager``.
Note: if ``$SYSTEMD_PAGERSECURE`` is not set, ``$SYSTEMD_PAGER``
(as well as ``$PAGER``) will be silently ignored.
.. inclusion-end-marker-do-not-remove pager-body
.. inclusion-end-marker-do-not-remove pager
.. inclusion-marker-do-not-remove less
``$SYSTEMD_LESS``
-----------------
.. inclusion-marker-do-not-remove less-body
Override the options passed to ``less`` (by default
``FRSXMK``).
Users might want to change two options in particular:
``K``
-----
This option instructs the pager to exit immediately when
:kbd:`Ctrl` + :kbd:`C` is pressed. To allow
``less`` to handle :kbd:`Ctrl` + :kbd:`C`
itself to switch back to the pager command prompt, unset this option.
If the value of ``$SYSTEMD_LESS`` does not include ``K``,
and the pager that is invoked is ``less``,
:kbd:`Ctrl` + :kbd:`C` will be ignored by the
executable, and needs to be handled by the pager.
``X``
-----
This option instructs the pager to not send termcap initialization and deinitialization
strings to the terminal. It is set by default to allow command output to remain visible in the
terminal even after the pager exits. Nevertheless, this prevents some pager functionality from
working, in particular paged output cannot be scrolled with the mouse.
Note that setting the regular ``$LESS`` environment variable has no effect
for ``less`` invocations by systemd tools.
See
`less(1) <https://man7.org/linux/man-pages/man1/less.1.html>`_
for more discussion.
.. inclusion-end-marker-do-not-remove less-body
.. inclusion-end-marker-do-not-remove less
.. inclusion-marker-do-not-remove lesscharset
``$SYSTEMD_LESSCHARSET``
------------------------
Override the charset passed to ``less`` (by default ``utf-8``, if
the invoking terminal is determined to be UTF-8 compatible).
Note that setting the regular ``$LESSCHARSET`` environment variable has no effect
for ``less`` invocations by systemd tools.
.. inclusion-end-marker-do-not-remove lesscharset
.. inclusion-marker-do-not-remove lesssecure
``$SYSTEMD_PAGERSECURE``
------------------------
Takes a boolean argument. When true, the "secure" mode of the pager is enabled; if
false, disabled. If ``$SYSTEMD_PAGERSECURE`` is not set at all, secure mode is enabled
if the effective UID is not the same as the owner of the login session, see
`geteuid(2) <https://man7.org/linux/man-pages/man2/geteuid.2.html>`_
and :ref:`sd_pid_get_owner_uid(3)`.
In secure mode, ``LESSSECURE=1`` will be set when invoking the pager, and the pager shall
disable commands that open or create new files or start new subprocesses. When
``$SYSTEMD_PAGERSECURE`` is not set at all, pagers which are not known to implement
secure mode will not be used. (Currently only
`less(1) <https://man7.org/linux/man-pages/man1/less.1.html>`_
implements secure mode.)
Note: when commands are invoked with elevated privileges, for example under `sudo(8) <https://man7.org/linux/man-pages/man8/sudo.8.html>`_ or
`pkexec(1) <http://linux.die.net/man/1/pkexec>`_, care
must be taken to ensure that unintended interactive features are not enabled. "Secure" mode for the
pager may be enabled automatically as described above. Setting ``SYSTEMD_PAGERSECURE=0``
or not removing it from the inherited environment allows the user to invoke arbitrary commands. Note
that if the ``$SYSTEMD_PAGER`` or ``$PAGER`` variables are to be
honoured, ``$SYSTEMD_PAGERSECURE`` must be set too. It might be reasonable to completely
disable the pager using ``--no-pager`` instead.
.. inclusion-end-marker-do-not-remove lesssecure
.. inclusion-marker-do-not-remove colors
``$SYSTEMD_COLORS``
-------------------
Takes a boolean argument. When true, ``systemd`` and related utilities
will use colors in their output, otherwise the output will be monochrome. Additionally, the variable can
take one of the following special values: ``16``, ``256`` to restrict the use
of colors to the base 16 or 256 ANSI colors, respectively. This can be specified to override the automatic
decision based on ``$TERM`` and what the console is connected to.
.. COMMENT: This is not documented on purpose, because it is not clear if $NO_COLOR will become supported
widely enough. So let's provide support, but without advertising this.
$NO_COLOR: If set (to any value), and $SYSTEMD_COLORS is not set, equivalent to
SYSTEMD_COLORS=0. See https://no-color.org/.
.. inclusion-end-marker-do-not-remove colors
.. inclusion-marker-do-not-remove urlify
``$SYSTEMD_URLIFY``
-------------------
The value must be a boolean. Controls whether clickable links should be generated in
the output for terminal emulators supporting this. This can be specified to override the decision that
``systemd`` makes based on ``$TERM`` and other conditions.
.. inclusion-end-marker-do-not-remove urlify

View File

@ -0,0 +1,15 @@
.. SPDX-License-Identifier: LGPL-2.1-or-later
:orphan:
Notes
#####
Functions described here are available as a shared
library, which can be compiled against and linked to with the
``libsystemd`` `pkg-config(1) <http://linux.die.net/man/1/pkg-config>`_
file.
.. include:: ./includes/threads-aware.rst
:start-after: .. inclusion-marker-do-not-remove getenv
:end-before: .. inclusion-end-marker-do-not-remove getenv

View File

@ -0,0 +1,296 @@
:title: sd_journal_get_data
:manvolnum: 3
.. _sd_journal_get_data(3):
======================
sd_journal_get_data(3)
======================
**Name**
sd_journal_get_data, sd_journal_enumerate_data, sd_journal_enumerate_available_data, sd_journal_restart_data, SD_JOURNAL_FOREACH_DATA, sd_journal_set_data_threshold, sd_journal_get_data_threshold — Read data fields from the current journal entry
###########################################################################################################################################################################################################################################################
**Synopsis**
.. code-block:: c
#include <systemd/sd-journal.h>

int sd_journal_get_data(sd_journal *j, const char *field, const void **data, size_t *length);
int sd_journal_enumerate_data(sd_journal *j, const void **data, size_t *length);
int sd_journal_enumerate_available_data(sd_journal *j, const void **data, size_t *length);
void sd_journal_restart_data(sd_journal *j);
SD_JOURNAL_FOREACH_DATA(sd_journal *j, const void *data, size_t length);
int sd_journal_set_data_threshold(sd_journal *j, size_t sz);
int sd_journal_get_data_threshold(sd_journal *j, size_t *sz);
Description
===========
``sd_journal_get_data()`` gets the data object associated with a specific field
from the current journal entry. It takes four arguments: the journal context object, a string with the
field name to request, plus a pair of pointers to pointer/size variables where the data object and its
size shall be stored in. The field name should be an entry field name. Well-known field names are listed in
:ref:`systemd.journal-fields(7)`,
but any field can be specified. The returned data is in a read-only memory map and is only valid until
the next invocation of ``sd_journal_get_data()``,
``sd_journal_enumerate_data()``,
``sd_journal_enumerate_available_data()``, or when the read pointer is altered. Note
that the data returned will be prefixed with the field name and ``=``. Also note that, by
default, data fields larger than 64K might get truncated to 64K. This threshold may be changed and turned
off with ``sd_journal_set_data_threshold()`` (see below).
``sd_journal_enumerate_data()`` may be used
to iterate through all fields of the current entry. On each
invocation the data for the next field is returned. The order of
these fields is not defined. The data returned is in the same
format as with ``sd_journal_get_data()`` and also
follows the same life-time semantics.
``sd_journal_enumerate_available_data()`` is similar to
``sd_journal_enumerate_data()``, but silently skips any fields which may be valid, but
are too large or not supported by the current implementation.
``sd_journal_restart_data()`` resets the
data enumeration index to the beginning of the entry. The next
invocation of ``sd_journal_enumerate_data()``
will return the first field of the entry again.
Note that the ``SD_JOURNAL_FOREACH_DATA()`` macro may be used as a handy wrapper
around ``sd_journal_restart_data()`` and
``sd_journal_enumerate_available_data()``.
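For illustration, the macro behaves roughly like the following loop. This is a minimal sketch of the documented semantics, not a copy of the macro definition in ``sd-journal.h``; ``list_fields()`` is a hypothetical helper.
.. code-block:: c
#include <systemd/sd-journal.h>
#include <stdio.h>

/* Roughly what SD_JOURNAL_FOREACH_DATA(j, data, length) { ... } does:
 * rewind the field enumeration, then walk all available fields of the
 * current entry, skipping oversized or unsupported ones. */
void list_fields(sd_journal *j) {
  const void *data;
  size_t length;

  sd_journal_restart_data(j);
  while (sd_journal_enumerate_available_data(j, &data, &length) > 0)
    printf("%.*s\n", (int) length, (const char *) data);
}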
Note that these functions will not work before
:ref:`sd_journal_next(3)`
(or related call) has been called at least once, in order to
position the read pointer at a valid entry.
``sd_journal_set_data_threshold()`` may be
used to change the data field size threshold for data returned by
``sd_journal_get_data()``,
``sd_journal_enumerate_data()`` and
``sd_journal_enumerate_unique()``. This threshold
is a hint only: it indicates that the client program is interested
only in the initial parts of the data fields, up to the threshold
in size — but the library might still return larger data objects.
That means applications should not rely exclusively on this
setting to limit the size of the data fields returned, but need to
apply an explicit size limit on the returned data as well. This
threshold defaults to 64K. To retrieve the complete
data fields this threshold should be turned off by setting it to
0, so that the library always returns the complete data objects.
It is recommended to set this threshold as low as possible since
this relieves the library from having to decompress large
compressed data objects in full.
``sd_journal_get_data_threshold()`` returns
the currently configured data field size threshold.
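For example, a program that needs complete field payloads could disable the threshold right after opening the journal. The following is a minimal sketch under that assumption; ``open_journal_untruncated()`` is a hypothetical helper and error handling is kept short.
.. code-block:: c
#include <systemd/sd-journal.h>

int open_journal_untruncated(sd_journal **ret) {
  sd_journal *j = NULL;
  int r;

  r = sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY);
  if (r < 0)
    return r;

  /* A threshold of 0 turns the 64K size hint off, so complete data
   * objects are returned by sd_journal_get_data() and friends. */
  r = sd_journal_set_data_threshold(j, 0);
  if (r < 0) {
    sd_journal_close(j);
    return r;
  }

  *ret = j;
  return 0;
}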
Return Value
============
``sd_journal_get_data()`` returns 0 on success or a negative errno-style error
code. ``sd_journal_enumerate_data()`` and
``sd_journal_enumerate_available_data()`` return a positive integer if the next field
has been read, 0 when no more fields remain, or a negative errno-style error code.
``sd_journal_restart_data()`` doesn't return anything.
``sd_journal_set_data_threshold()`` and ``sd_journal_get_data_threshold()``
return 0 on success or a negative errno-style error code.
Errors
------
Returned errors may indicate the following problems:
.. inclusion-marker-do-not-remove EINVAL
-EINVAL
-------
One of the required parameters is ``NULL`` or invalid.
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove EINVAL
.. inclusion-marker-do-not-remove ECHILD
-ECHILD
-------
The journal object was created in a different process, library or module instance.
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove ECHILD
.. inclusion-marker-do-not-remove EADDRNOTAVAIL
-EADDRNOTAVAIL
--------------
The read pointer is not positioned at a valid entry;
:ref:`sd_journal_next(3)`
or a related call has not been called at least once.
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove EADDRNOTAVAIL
.. inclusion-marker-do-not-remove ENOENT
-ENOENT
-------
The current entry does not include the specified field.
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove ENOENT
.. inclusion-marker-do-not-remove ENOMEM
-ENOMEM
-------
Memory allocation failed.
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove ENOMEM
.. inclusion-marker-do-not-remove ENOBUFS
-ENOBUFS
--------
A compressed entry is too large.
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove ENOBUFS
.. inclusion-marker-do-not-remove E2BIG
-E2BIG
------
The data field is too large for this computer architecture (e.g. above 4 GB on a
32-bit architecture).
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove E2BIG
.. inclusion-marker-do-not-remove EPROTONOSUPPORT
-EPROTONOSUPPORT
----------------
The journal is compressed with an unsupported method or the journal uses an
unsupported feature.
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove EPROTONOSUPPORT
.. inclusion-marker-do-not-remove EBADMSG
-EBADMSG
--------
The journal is corrupted (possibly just the entry being iterated over).
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove EBADMSG
.. inclusion-marker-do-not-remove EIO
-EIO
----
An I/O error was reported by the kernel.
.. versionadded:: 246
.. inclusion-end-marker-do-not-remove EIO
Notes
=====
.. include:: ./threads-aware.rst
:start-after: .. inclusion-marker-do-not-remove strict
:end-before: .. inclusion-end-marker-do-not-remove strict
.. include:: ./libsystemd-pkgconfig.rst
:start-after: .. inclusion-marker-do-not-remove pkgconfig-text
:end-before: .. inclusion-end-marker-do-not-remove pkgconfig-text
Examples
========
See
:ref:`sd_journal_next(3)`
for a complete example how to use
``sd_journal_get_data()``.
Use the
``SD_JOURNAL_FOREACH_DATA()`` macro to
iterate through all fields of the current journal
entry:
.. code-block:: c
int print_fields(sd_journal *j) {
  const void *data;
  size_t length;

  SD_JOURNAL_FOREACH_DATA(j, data, length)
    printf("%.*s\n", (int) length, (const char *) data);

  return 0;
}
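As a further illustration (a minimal sketch, not taken from the upstream examples; ``print_last_message()`` is a hypothetical helper), the following function positions the read pointer on the most recent entry and reads its ``MESSAGE=`` field with ``sd_journal_get_data()``:
.. code-block:: c
#include <systemd/sd-journal.h>
#include <stdio.h>

int print_last_message(void) {
  sd_journal *j = NULL;
  const void *data;
  size_t length;
  int r;

  r = sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY);
  if (r < 0)
    return r;

  /* sd_journal_get_data() fails with -EADDRNOTAVAIL unless the read
   * pointer has been positioned on a valid entry first. */
  r = sd_journal_seek_tail(j);
  if (r >= 0)
    r = sd_journal_previous(j);
  if (r > 0) {
    r = sd_journal_get_data(j, "MESSAGE", &data, &length);
    if (r >= 0)
      /* data points to "MESSAGE=...", not NUL-terminated. */
      printf("%.*s\n", (int) length, (const char *) data);
  }

  sd_journal_close(j);
  return r;
}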
History
=======
``sd_journal_get_data()``,
``sd_journal_enumerate_data()``,
``sd_journal_restart_data()``, and
``SD_JOURNAL_FOREACH_DATA()`` were added in version 187.
``sd_journal_set_data_threshold()`` and
``sd_journal_get_data_threshold()`` were added in version 196.
``sd_journal_enumerate_available_data()`` was added in version 246.
See Also
========
:ref:`systemd(1)`, :ref:`systemd.journal-fields(7)`, :ref:`sd-journal(3)`, :ref:`sd_journal_open(3)`, :ref:`sd_journal_next(3)`, :ref:`sd_journal_get_realtime_usec(3)`, :ref:`sd_journal_query_unique(3)`

View File

@ -0,0 +1,164 @@
.. SPDX-License-Identifier: LGPL-2.1-or-later
:orphan:
.. inclusion-marker-do-not-remove help
``-h, --help``
--------------
Print a short help text and exit.
.. inclusion-end-marker-do-not-remove help
.. inclusion-marker-do-not-remove version
``--version``
-------------
Print a short version string and exit.
.. inclusion-end-marker-do-not-remove version
.. inclusion-marker-do-not-remove no-pager
``--no-pager``
--------------
Do not pipe output into a pager.
.. inclusion-end-marker-do-not-remove no-pager
.. inclusion-marker-do-not-remove no-ask-password
``--no-ask-password``
---------------------
Do not query the user for authentication for privileged operations.
.. inclusion-end-marker-do-not-remove no-ask-password
.. inclusion-marker-do-not-remove legend
``--legend=<BOOL>``
-------------------
Enable or disable printing of the legend, i.e. column headers and the footer with hints. The
legend is printed by default, unless disabled with ``--quiet`` or similar.
.. inclusion-end-marker-do-not-remove legend
.. inclusion-marker-do-not-remove no-legend
``--no-legend``
---------------
Do not print the legend, i.e. column headers and the
footer with hints.
.. inclusion-end-marker-do-not-remove no-legend
.. inclusion-marker-do-not-remove cat-config
``--cat-config``
----------------
Copy the contents of config files to standard output.
Before each file, the filename is printed as a comment.
.. inclusion-end-marker-do-not-remove cat-config
.. inclusion-marker-do-not-remove tldr
``--tldr``
----------
Copy the contents of config files to standard output. Only the "interesting" parts of the
configuration files are printed, comments and empty lines are skipped. Before each file, the filename
is printed as a comment.
.. inclusion-end-marker-do-not-remove tldr
.. inclusion-marker-do-not-remove json
``--json=<MODE>``
-----------------
Shows output formatted as JSON. Expects one of ``short`` (for the
shortest possible output without any redundant whitespace or line breaks), ``pretty``
(for a pretty version of the same, with indentation and line breaks) or ``off`` (to turn
off JSON output, the default).
.. inclusion-end-marker-do-not-remove json
.. inclusion-marker-do-not-remove j
``-j``
------
Equivalent to ``--json=pretty`` if running on a terminal, and
``--json=short`` otherwise.
.. inclusion-end-marker-do-not-remove j
.. inclusion-marker-do-not-remove signal
``-s, --signal=``
-----------------
When used with ``kill``, choose which signal to send to selected processes. Must
be one of the well-known signal specifiers such as ``SIGTERM``,
``SIGINT`` or ``SIGSTOP``. If omitted, defaults to
``SIGTERM``.
The special value ``help`` will list the known values and the program will exit
immediately, and the special value ``list`` will list known values along with the
numerical signal numbers and the program will exit immediately.
.. inclusion-end-marker-do-not-remove signal
.. inclusion-marker-do-not-remove image-policy-open
``--image-policy=<policy>``
---------------------------
Takes an image policy string as argument, as per
:ref:`systemd.image-policy(7)`. The
policy is enforced when operating on the disk image specified via ``--image=``, see
above. If not specified defaults to the ``*`` policy, i.e. all recognized file systems
in the image are used.
.. inclusion-end-marker-do-not-remove image-policy-open
.. inclusion-marker-do-not-remove esp-path
``--esp-path=``
---------------
Path to the EFI System Partition (ESP). If not specified, ``/efi/``,
``/boot/``, and ``/boot/efi/`` are checked in turn. It is
recommended to mount the ESP to ``/efi/``, if possible.
.. inclusion-end-marker-do-not-remove esp-path
.. inclusion-marker-do-not-remove boot-path
``--boot-path=``
----------------
Path to the Extended Boot Loader partition, as defined in the
`Boot Loader Specification <https://uapi-group.org/specifications/specs/boot_loader_specification>`_.
If not specified, ``/boot/`` is checked. It is recommended to mount the Extended Boot
Loader partition to ``/boot/``, if possible.
.. inclusion-end-marker-do-not-remove boot-path
.. inclusion-marker-do-not-remove option-P
``-P``
------
Equivalent to ``--value`` ``--property=``, i.e. shows the value of the
property without the property name or ``=``. Note that using ``-P`` once
will also affect all properties listed with ``-p``/``--property=``.
.. inclusion-end-marker-do-not-remove option-P

View File

@ -0,0 +1,28 @@
.. SPDX-License-Identifier: LGPL-2.1-or-later
.. inclusion-marker-do-not-remove strict
All functions listed here are thread-agnostic and only a single specific thread may operate on a
given object during its entire lifetime. It's safe to allocate multiple independent objects and use each from a
specific thread in parallel. However, it's not safe to allocate such an object in one thread, and operate or free it
from any other, even if locking is used to ensure these threads don't operate on it at the very same time.
.. inclusion-end-marker-do-not-remove strict
.. inclusion-marker-do-not-remove safe
All functions listed here are thread-safe and may be called in parallel from multiple threads.
.. inclusion-end-marker-do-not-remove safe
.. inclusion-marker-do-not-remove getenv
The code described here uses
:man-pages:`getenv(3)`,
which is declared to be not multi-thread-safe. This means that the code calling the functions described
here must not call
:man-pages:`setenv(3)`
from a parallel thread. It is recommended to only do calls to ``setenv()``
from an early phase of the program when no other threads have been started.
.. inclusion-end-marker-do-not-remove getenv

View File

@ -0,0 +1,70 @@
.. SPDX-License-Identifier: LGPL-2.1-or-later
:orphan:
.. inclusion-marker-do-not-remove user
``--user``
----------
Talk to the service manager of the calling user,
rather than the service manager of the system.
.. inclusion-end-marker-do-not-remove user
.. inclusion-marker-do-not-remove system
``--system``
------------
Talk to the service manager of the system. This is the
implied default.
.. inclusion-end-marker-do-not-remove system
.. inclusion-marker-do-not-remove host
``-H, --host=``
---------------
Execute the operation remotely. Specify a hostname, or a
username and hostname separated by ``@``, to
connect to. The hostname may optionally be suffixed by a
port ssh is listening on, separated by ``:``, and then a
container name, separated by ``/``, which
connects directly to a specific container on the specified
host. This will use SSH to talk to the remote machine manager
instance. Container names may be enumerated with
``machinectl -H
<HOST>``. Put IPv6 addresses in brackets.
.. inclusion-end-marker-do-not-remove host
.. inclusion-marker-do-not-remove machine
``-M, --machine=``
------------------
Execute operation on a local container. Specify a container name to connect to, optionally
prefixed by a user name to connect as and a separating ``@`` character. If the special
string ``.host`` is used in place of the container name, a connection to the local
system is made (which is useful to connect to a specific user's user bus: ``--user
--machine=lennart@.host``). If the ``@`` syntax is not used, the connection is
made as root user. If the ``@`` syntax is used either the left hand side or the right hand
side may be omitted (but not both) in which case the local user name and ``.host`` are
implied.
.. inclusion-end-marker-do-not-remove machine
.. inclusion-marker-do-not-remove capsule
``-C, --capsule=``
------------------
Execute operation on a capsule. Specify a capsule name to connect to. See
:ref:`capsule@.service(5)` for
details about capsules.
.. versionadded:: 256
.. inclusion-end-marker-do-not-remove capsule

View File

@ -0,0 +1,39 @@
.. SPDX-License-Identifier: LGPL-2.1-or-later
.. systemd documentation master file, created by
sphinx-quickstart on Wed Jun 26 16:24:13 2024.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
systemd — System and Service Manager
=============================================
.. manual reference to a doc by its reference label
see: https://www.sphinx-doc.org/en/master/usage/referencing.html#cross-referencing-arbitrary-locations
.. Manual links
.. ------------
.. :ref:`busctl(1)`
.. :ref:`systemd(1)`
.. OR using the toctree to pull in files
https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#directive-toctree
.. This only works if we restructure our headings to match
https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html#sections
and then only have single top-level heading with the command name
.. toctree::
:maxdepth: 1
docs/busctl
docs/runlevel
docs/journalctl
docs/os-release
docs/systemd
docs/systemD-directives
docs/repart.d
docs/includes/sd_journal_get_data
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

View File

@ -962,7 +962,9 @@ default ignore - -</programlisting>
discovered/supported/used, prints <literal>no</literal>. Otherwise prints
<literal>partial</literal>. In either of these two cases exits with non-zero exit status. It also shows
five lines indicating separately whether firmware, drivers, the system, the kernel and libraries
discovered/support/use TPM2.</para>
discovered/support/use TPM2. Currently, required libraries are <filename>libtss2-esys.so.0</filename>,
<filename>libtss2-rc.so.0</filename>, and <filename>libtss2-mu.so.0</filename>. The requirement may be
changed in a future release.</para>
<para>Note, this checks for TPM 2.0 devices only, and does not consider TPM 1.2 at all.</para>

View File

@ -29,7 +29,7 @@
<refsect1>
<title>Description</title>
<para><command>systemd-nsresourced</command> is a system service that permits transient delegation of a a
<para><command>systemd-nsresourced</command> is a system service that permits transient delegation of a
UID/GID range to a user namespace (see <citerefentry
project='man-pages'><refentrytitle>user_namespaces</refentrytitle><manvolnum>7</manvolnum></citerefentry>)
allocated by a client, via a Varlink IPC API.</para>

View File

@ -3,18 +3,11 @@
set -e
set -o nounset
if [[ "$DISTRIBUTION" =~ ubuntu|debian ]]; then
SUDO_GROUP=sudo
else
SUDO_GROUP=wheel
fi
useradd \
--uid 4711 \
--user-group \
--create-home \
--password "$(openssl passwd -1 testuser)" \
--groups "$SUDO_GROUP",systemd-journal \
--shell /bin/bash \
testuser

View File

@ -67,7 +67,7 @@ _systemd_analyze() {
)
local -A VERBS=(
[STANDALONE]='time blame unit-files unit-paths exit-status compare-versions calendar timestamp timespan pcrs srk'
[STANDALONE]='time blame unit-files unit-paths exit-status compare-versions calendar timestamp timespan pcrs srk has-tpm2'
[CRITICAL_CHAIN]='critical-chain'
[DOT]='dot'
[DUMP]='dump'

View File

@ -73,6 +73,7 @@ JSON or table format'
'timespan:Parse a systemd syntax timespan'
'security:Analyze security settings of a service'
'inspect-elf:Parse and print ELF package metadata'
'has-tpm2:Report whether TPM2 support is available'
# log-level, log-target, service-watchdogs have been deprecated
)

View File

@ -96,7 +96,7 @@ int verb_pcrs(int argc, char *argv[], void *userdata) {
const char *alg = NULL;
int r;
if (tpm2_support() != TPM2_SUPPORT_FULL)
if (!tpm2_is_fully_supported())
log_notice("System lacks full TPM2 support, not showing PCR state.");
else {
r = get_pcr_alg(&alg);

View File

@ -411,7 +411,6 @@ int verb_status(int argc, char *argv[], void *userdata) {
_cleanup_free_ char *fw_type = NULL, *fw_info = NULL, *loader = NULL, *loader_path = NULL, *stub = NULL, *stub_path = NULL,
*current_entry = NULL, *oneshot_entry = NULL, *default_entry = NULL;
uint64_t loader_features = 0, stub_features = 0;
Tpm2Support s;
int have;
(void) efi_get_variable_string_and_warn(EFI_LOADER_VARIABLE(LoaderFirmwareType), &fw_type);
@ -440,7 +439,7 @@ int verb_status(int argc, char *argv[], void *userdata) {
else
printf("\n");
s = tpm2_support();
Tpm2Support s = tpm2_support_full(TPM2_SUPPORT_FIRMWARE|TPM2_SUPPORT_DRIVER);
printf(" TPM2 Support: %s%s%s\n",
FLAGS_SET(s, TPM2_SUPPORT_FIRMWARE|TPM2_SUPPORT_DRIVER) ? ansi_highlight_green() :
(s & (TPM2_SUPPORT_FIRMWARE|TPM2_SUPPORT_DRIVER)) != 0 ? ansi_highlight_red() : ansi_highlight_yellow(),

View File

@ -1005,7 +1005,7 @@ static int validate_stub(void) {
bool found = false;
int r;
if (tpm2_support() != TPM2_SUPPORT_FULL)
if (!tpm2_is_fully_supported())
return log_error_errno(SYNTHETIC_ERRNO(EOPNOTSUPP), "Sorry, system lacks full TPM2 support.");
r = efi_stub_get_features(&features);

View File

@ -1229,7 +1229,7 @@ static int generic_method_get_interface_description(
sd_varlink_method_flags_t flags,
void *userdata) {
static const struct sd_json_dispatch_field dispatch_table[] = {
static const sd_json_dispatch_field dispatch_table[] = {
{ "interface", SD_JSON_VARIANT_STRING, sd_json_dispatch_const_string, 0, SD_JSON_MANDATORY },
{}
};

View File

@ -416,19 +416,18 @@ static int list_machine_one(sd_varlink *link, Machine *m, bool more) {
}
static int vl_method_list(sd_varlink *link, sd_json_variant *parameters, sd_varlink_method_flags_t flags, void *userdata) {
Manager *m = ASSERT_PTR(userdata);
const char *mn = NULL;
const sd_json_dispatch_field dispatch_table[] = {
{ "name", SD_JSON_VARIANT_STRING, sd_json_dispatch_const_string, PTR_TO_SIZE(&mn), 0 },
static const sd_json_dispatch_field dispatch_table[] = {
{ "name", SD_JSON_VARIANT_STRING, sd_json_dispatch_const_string, 0, 0 },
{}
};
Manager *m = ASSERT_PTR(userdata);
const char *mn = NULL;
int r;
assert(parameters);
r = sd_varlink_dispatch(link, parameters, dispatch_table, 0);
r = sd_varlink_dispatch(link, parameters, dispatch_table, &mn);
if (r != 0)
return r;

View File

@ -97,8 +97,8 @@ static int neighbor_append_json(Neighbor *n, sd_json_variant **array) {
return sd_json_variant_append_arraybo(
array,
SD_JSON_BUILD_PAIR_INTEGER("Family", n->family),
JSON_BUILD_PAIR_IN_ADDR("Destination", &n->in_addr, n->family),
SD_JSON_BUILD_PAIR_INTEGER("Family", n->dst_addr.family),
JSON_BUILD_PAIR_IN_ADDR("Destination", &n->dst_addr.address, n->dst_addr.family),
JSON_BUILD_PAIR_HW_ADDR("LinkLayerAddress", &n->ll_addr),
SD_JSON_BUILD_PAIR_STRING("ConfigSource", network_config_source_to_string(n->source)),
SD_JSON_BUILD_PAIR_STRING("ConfigState", state));
@ -168,7 +168,7 @@ static int nexthop_append_json(NextHop *n, sd_json_variant **array) {
return sd_json_variant_append_arraybo(
array,
SD_JSON_BUILD_PAIR_UNSIGNED("ID", n->id),
JSON_BUILD_PAIR_IN_ADDR_NON_NULL("Gateway", &n->gw, n->family),
JSON_BUILD_PAIR_IN_ADDR_NON_NULL("Gateway", &n->gw.address, n->family),
SD_JSON_BUILD_PAIR_UNSIGNED("Flags", n->flags),
SD_JSON_BUILD_PAIR_STRING("FlagsString", strempty(flags)),
SD_JSON_BUILD_PAIR_UNSIGNED("Protocol", n->protocol),

View File

@ -147,29 +147,29 @@ static int neighbor_dup(const Neighbor *neighbor, Neighbor **ret) {
static void neighbor_hash_func(const Neighbor *neighbor, struct siphash *state) {
assert(neighbor);
siphash24_compress_typesafe(neighbor->family, state);
siphash24_compress_typesafe(neighbor->dst_addr.family, state);
if (!IN_SET(neighbor->family, AF_INET, AF_INET6))
if (!IN_SET(neighbor->dst_addr.family, AF_INET, AF_INET6))
/* treat any other address family as AF_UNSPEC */
return;
/* Equality of neighbors are given by the destination address.
* See neigh_lookup() in the kernel. */
in_addr_hash_func(&neighbor->in_addr, neighbor->family, state);
in_addr_hash_func(&neighbor->dst_addr.address, neighbor->dst_addr.family, state);
}
static int neighbor_compare_func(const Neighbor *a, const Neighbor *b) {
int r;
r = CMP(a->family, b->family);
r = CMP(a->dst_addr.family, b->dst_addr.family);
if (r != 0)
return r;
if (!IN_SET(a->family, AF_INET, AF_INET6))
if (!IN_SET(a->dst_addr.family, AF_INET, AF_INET6))
/* treat any other address family as AF_UNSPEC */
return 0;
return memcmp(&a->in_addr, &b->in_addr, FAMILY_ADDRESS_SIZE(a->family));
return memcmp(&a->dst_addr.address, &b->dst_addr.address, FAMILY_ADDRESS_SIZE(a->dst_addr.family));
}
static int neighbor_get_request(Link *link, const Neighbor *neighbor, Request **ret) {
@ -244,7 +244,7 @@ static void log_neighbor_debug(const Neighbor *neighbor, const char *str, const
"%s %s neighbor (%s): lladdr: %s, dst: %s",
str, strna(network_config_source_to_string(neighbor->source)), strna(state),
HW_ADDR_TO_STR(&neighbor->ll_addr),
IN_ADDR_TO_STRING(neighbor->family, &neighbor->in_addr));
IN_ADDR_TO_STRING(neighbor->dst_addr.family, &neighbor->dst_addr.address));
}
static int neighbor_configure(Neighbor *neighbor, Link *link, Request *req) {
@ -261,7 +261,7 @@ static int neighbor_configure(Neighbor *neighbor, Link *link, Request *req) {
log_neighbor_debug(neighbor, "Configuring", link);
r = sd_rtnl_message_new_neigh(link->manager->rtnl, &m, RTM_NEWNEIGH,
link->ifindex, neighbor->family);
link->ifindex, neighbor->dst_addr.family);
if (r < 0)
return r;
@ -273,7 +273,7 @@ static int neighbor_configure(Neighbor *neighbor, Link *link, Request *req) {
if (r < 0)
return r;
r = netlink_message_append_in_addr_union(m, NDA_DST, neighbor->family, &neighbor->in_addr);
r = netlink_message_append_in_addr_union(m, NDA_DST, neighbor->dst_addr.family, &neighbor->dst_addr.address);
if (r < 0)
return r;
@ -338,7 +338,7 @@ static int link_request_neighbor(Link *link, const Neighbor *neighbor) {
"The link layer address length (%zu) for neighbor %s does not match with "
"the hardware address length (%zu), ignoring the setting.",
neighbor->ll_addr.length,
IN_ADDR_TO_STRING(neighbor->family, &neighbor->in_addr),
IN_ADDR_TO_STRING(neighbor->dst_addr.family, &neighbor->dst_addr.address),
link->hw_addr.length);
return 0;
}
@ -451,11 +451,11 @@ int neighbor_remove(Neighbor *neighbor, Link *link) {
log_neighbor_debug(neighbor, "Removing", link);
r = sd_rtnl_message_new_neigh(link->manager->rtnl, &m, RTM_DELNEIGH,
link->ifindex, neighbor->family);
link->ifindex, neighbor->dst_addr.family);
if (r < 0)
return log_link_error_errno(link, r, "Could not allocate RTM_DELNEIGH message: %m");
r = netlink_message_append_in_addr_union(m, NDA_DST, neighbor->family, &neighbor->in_addr);
r = netlink_message_append_in_addr_union(m, NDA_DST, neighbor->dst_addr.family, &neighbor->dst_addr.address);
if (r < 0)
return log_link_error_errno(link, r, "Could not append NDA_DST attribute: %m");
@ -593,19 +593,19 @@ int manager_rtnl_process_neighbor(sd_netlink *rtnl, sd_netlink_message *message,
return log_oom();
/* First, retrieve the fundamental information about the neighbor. */
r = sd_rtnl_message_neigh_get_family(message, &tmp->family);
r = sd_rtnl_message_neigh_get_family(message, &tmp->dst_addr.family);
if (r < 0) {
log_link_warning(link, "rtnl: received neighbor message without family, ignoring.");
return 0;
}
if (tmp->family == AF_BRIDGE) /* Currently, we do not support it. */
if (tmp->dst_addr.family == AF_BRIDGE) /* Currently, we do not support it. */
return 0;
if (!IN_SET(tmp->family, AF_INET, AF_INET6)) {
log_link_debug(link, "rtnl: received neighbor message with invalid family '%i', ignoring.", tmp->family);
if (!IN_SET(tmp->dst_addr.family, AF_INET, AF_INET6)) {
log_link_debug(link, "rtnl: received neighbor message with invalid family '%i', ignoring.", tmp->dst_addr.family);
return 0;
}
r = netlink_message_read_in_addr_union(message, NDA_DST, tmp->family, &tmp->in_addr);
r = netlink_message_read_in_addr_union(message, NDA_DST, tmp->dst_addr.family, &tmp->dst_addr.address);
if (r < 0) {
log_link_warning_errno(link, r, "rtnl: received neighbor message without valid address, ignoring: %m");
return 0;
@ -660,28 +660,28 @@ int manager_rtnl_process_neighbor(sd_netlink *rtnl, sd_netlink_message *message,
return 1;
}
#define log_neighbor_section(neighbor, fmt, ...) \
({ \
const Neighbor *_neighbor = (neighbor); \
log_section_warning_errno( \
_neighbor ? _neighbor->section : NULL, \
SYNTHETIC_ERRNO(EINVAL), \
fmt " Ignoring [Neighbor] section.", \
##__VA_ARGS__); \
})
static int neighbor_section_verify(Neighbor *neighbor) {
if (section_is_invalid(neighbor->section))
return -EINVAL;
if (neighbor->family == AF_UNSPEC)
return log_warning_errno(SYNTHETIC_ERRNO(EINVAL),
"%s: Neighbor section without Address= configured. "
"Ignoring [Neighbor] section from line %u.",
neighbor->section->filename, neighbor->section->line);
if (neighbor->dst_addr.family == AF_UNSPEC)
return log_neighbor_section(neighbor, "Neighbor section without Address= configured.");
if (neighbor->family == AF_INET6 && !socket_ipv6_is_supported())
return log_warning_errno(SYNTHETIC_ERRNO(EINVAL),
"%s: Neighbor section with an IPv6 destination address configured, "
"but the kernel does not support IPv6. "
"Ignoring [Neighbor] section from line %u.",
neighbor->section->filename, neighbor->section->line);
if (neighbor->dst_addr.family == AF_INET6 && !socket_ipv6_is_supported())
return log_neighbor_section(neighbor, "Neighbor section with an IPv6 destination address configured, but the kernel does not support IPv6.");
if (neighbor->ll_addr.length == 0)
return log_warning_errno(SYNTHETIC_ERRNO(EINVAL),
"%s: Neighbor section without LinkLayerAddress= configured. "
"Ignoring [Neighbor] section from line %u.",
neighbor->section->filename, neighbor->section->line);
return log_neighbor_section(neighbor, "Neighbor section without LinkLayerAddress= configured.");
return 0;
}
@ -709,7 +709,7 @@ int network_drop_invalid_neighbors(Network *network) {
log_warning("%s: Duplicated neighbor settings for %s is specified at line %u and %u, "
"dropping the neighbor setting specified at line %u.",
dup->section->filename,
IN_ADDR_TO_STRING(neighbor->family, &neighbor->in_addr),
IN_ADDR_TO_STRING(neighbor->dst_addr.family, &neighbor->dst_addr.address),
neighbor->section->line,
dup->section->line, dup->section->line);
/* neighbor_detach() will drop the neighbor from neighbors_by_section. */
@ -728,7 +728,7 @@ int network_drop_invalid_neighbors(Network *network) {
}
int config_parse_neighbor_address(
int config_parse_neighbor_section(
const char *unit,
const char *filename,
unsigned line,
@ -740,76 +740,26 @@ int config_parse_neighbor_address(
void *data,
void *userdata) {
_cleanup_(neighbor_unref_or_set_invalidp) Neighbor *n = NULL;
static const ConfigSectionParser table[_NEIGHBOR_CONF_PARSER_MAX] = {
[NEIGHBOR_DESTINATION_ADDRESS] = { .parser = config_parse_in_addr_data, .ltype = 0, .offset = offsetof(Neighbor, dst_addr), },
[NEIGHBOR_LINK_LAYER_ADDRESS] = { .parser = config_parse_hw_addr, .ltype = 0, .offset = offsetof(Neighbor, ll_addr), },
};
_cleanup_(neighbor_unref_or_set_invalidp) Neighbor *neighbor = NULL;
Network *network = ASSERT_PTR(userdata);
int r;
assert(filename);
assert(section);
assert(lvalue);
assert(rvalue);
r = neighbor_new_static(network, filename, section_line, &n);
r = neighbor_new_static(network, filename, section_line, &neighbor);
if (r < 0)
return log_oom();
if (isempty(rvalue)) {
n->family = AF_UNSPEC;
n->in_addr = IN_ADDR_NULL;
TAKE_PTR(n);
return 0;
}
r = config_section_parse(table, ELEMENTSOF(table),
unit, filename, line, section, section_line, lvalue, ltype, rvalue, neighbor);
if (r <= 0) /* 0 means non-critical error, but the section will be ignored. */
return r;
r = in_addr_from_string_auto(rvalue, &n->family, &n->in_addr);
if (r < 0) {
log_syntax(unit, LOG_WARNING, filename, line, r,
"Neighbor Address is invalid, ignoring assignment: %s", rvalue);
return 0;
}
TAKE_PTR(n);
return 0;
}
int config_parse_neighbor_lladdr(
const char *unit,
const char *filename,
unsigned line,
const char *section,
unsigned section_line,
const char *lvalue,
int ltype,
const char *rvalue,
void *data,
void *userdata) {
_cleanup_(neighbor_unref_or_set_invalidp) Neighbor *n = NULL;
Network *network = ASSERT_PTR(userdata);
int r;
assert(filename);
assert(section);
assert(lvalue);
assert(rvalue);
r = neighbor_new_static(network, filename, section_line, &n);
if (r < 0)
return log_oom();
if (isempty(rvalue)) {
n->ll_addr = HW_ADDR_NULL;
TAKE_PTR(n);
return 0;
}
r = parse_hw_addr(rvalue, &n->ll_addr);
if (r < 0) {
log_syntax(unit, LOG_WARNING, filename, line, r,
"Neighbor %s= is invalid, ignoring assignment: %s",
lvalue, rvalue);
return 0;
}
TAKE_PTR(n);
TAKE_PTR(neighbor);
return 0;
}

View File

@ -23,8 +23,7 @@ typedef struct Neighbor {
unsigned n_ref;
int family;
union in_addr_union in_addr;
struct in_addr_data dst_addr;
struct hw_addr_data ll_addr;
} Neighbor;
@ -46,5 +45,11 @@ int manager_rtnl_process_neighbor(sd_netlink *rtnl, sd_netlink_message *message,
DEFINE_NETWORK_CONFIG_STATE_FUNCTIONS(Neighbor, neighbor);
CONFIG_PARSER_PROTOTYPE(config_parse_neighbor_address);
CONFIG_PARSER_PROTOTYPE(config_parse_neighbor_lladdr);
typedef enum NeighborConfParserType {
NEIGHBOR_DESTINATION_ADDRESS,
NEIGHBOR_LINK_LAYER_ADDRESS,
_NEIGHBOR_CONF_PARSER_MAX,
_NEIGHBOR_CONF_PARSER_INVALID = -EINVAL,
} NeighborConfParserType;
CONFIG_PARSER_PROTOTYPE(config_parse_neighbor_section);

View File

@ -173,9 +173,9 @@ Address.NetLabel, config_parse_address_section,
Address.NFTSet, config_parse_address_section, ADDRESS_NFT_SET, 0
IPv6AddressLabel.Prefix, config_parse_ipv6_address_label_section, IPV6_ADDRESS_LABEL_PREFIX, 0
IPv6AddressLabel.Label, config_parse_ipv6_address_label_section, IPV6_ADDRESS_LABEL, 0
Neighbor.Address, config_parse_neighbor_address, 0, 0
Neighbor.LinkLayerAddress, config_parse_neighbor_lladdr, 0, 0
Neighbor.MACAddress, config_parse_neighbor_lladdr, 0, 0 /* deprecated */
Neighbor.Address, config_parse_neighbor_section, NEIGHBOR_DESTINATION_ADDRESS, 0
Neighbor.LinkLayerAddress, config_parse_neighbor_section, NEIGHBOR_LINK_LAYER_ADDRESS, 0
Neighbor.MACAddress, config_parse_neighbor_section, NEIGHBOR_LINK_LAYER_ADDRESS, 0 /* deprecated */
RoutingPolicyRule.TypeOfService, config_parse_routing_policy_rule, ROUTING_POLICY_RULE_TOS, 0
RoutingPolicyRule.Priority, config_parse_routing_policy_rule, ROUTING_POLICY_RULE_PRIORITY, 0
RoutingPolicyRule.GoTo, config_parse_routing_policy_rule, ROUTING_POLICY_RULE_GOTO, 0
@ -219,12 +219,12 @@ Route.QuickAck, config_parse_route_metric_boolean,
Route.TCPCongestionControlAlgorithm, config_parse_route_metric_tcp_congestion, RTAX_CC_ALGO, 0
Route.FastOpenNoCookie, config_parse_route_metric_boolean, RTAX_FASTOPEN_NO_COOKIE, 0
Route.TTLPropagate, config_parse_warn_compat, DISABLED_LEGACY, 0
NextHop.Id, config_parse_nexthop_id, 0, 0
NextHop.Gateway, config_parse_nexthop_gateway, 0, 0
NextHop.Family, config_parse_nexthop_family, 0, 0
NextHop.OnLink, config_parse_nexthop_onlink, 0, 0
NextHop.Blackhole, config_parse_nexthop_blackhole, 0, 0
NextHop.Group, config_parse_nexthop_group, 0, 0
NextHop.Id, config_parse_nexthop_section, NEXTHOP_ID, 0
NextHop.Gateway, config_parse_nexthop_section, NEXTHOP_GATEWAY, 0
NextHop.Family, config_parse_nexthop_section, NEXTHOP_FAMILY, 0
NextHop.OnLink, config_parse_nexthop_section, NEXTHOP_ONLINK, 0
NextHop.Blackhole, config_parse_nexthop_section, NEXTHOP_BLACKHOLE, 0
NextHop.Group, config_parse_nexthop_section, NEXTHOP_GROUP, 0
DHCPv4.RequestAddress, config_parse_in_addr_non_null, AF_INET, offsetof(Network, dhcp_request_address)
DHCPv4.ClientIdentifier, config_parse_dhcp_client_identifier, 0, offsetof(Network, dhcp_client_identifier)
DHCPv4.UseDNS, config_parse_tristate, 0, offsetof(Network, dhcp_use_dns)

View File

@ -854,7 +854,7 @@ bool network_has_static_ipv6_configurations(Network *network) {
return true;
ORDERED_HASHMAP_FOREACH(neighbor, network->neighbors_by_section)
if (neighbor->family == AF_INET6)
if (neighbor->dst_addr.family == AF_INET6)
return true;
if (!hashmap_isempty(network->address_labels_by_section))


@ -236,7 +236,7 @@ static int nexthop_compare_full(const NextHop *a, const NextHop *b) {
return r;
if (IN_SET(a->family, AF_INET, AF_INET6)) {
r = memcmp(&a->gw, &b->gw, FAMILY_ADDRESS_SIZE(a->family));
r = memcmp(&a->gw.address, &b->gw.address, FAMILY_ADDRESS_SIZE(a->family));
if (r != 0)
return r;
}
@ -481,7 +481,7 @@ static void log_nexthop_debug(const NextHop *nexthop, const char *str, Manager *
log_link_debug(link, "%s %s nexthop (%s): id: %"PRIu32", gw: %s, blackhole: %s, group: %s, flags: %s",
str, strna(network_config_source_to_string(nexthop->source)), strna(state),
nexthop->id,
IN_ADDR_TO_STRING(nexthop->family, &nexthop->gw),
IN_ADDR_TO_STRING(nexthop->family, &nexthop->gw.address),
yes_no(nexthop->blackhole), strna(group), strna(flags));
}
@ -627,8 +627,8 @@ static int nexthop_configure(NextHop *nexthop, Link *link, Request *req) {
if (r < 0)
return r;
if (in_addr_is_set(nexthop->family, &nexthop->gw)) {
r = netlink_message_append_in_addr_union(m, NHA_GATEWAY, nexthop->family, &nexthop->gw);
if (in_addr_is_set(nexthop->family, &nexthop->gw.address)) {
r = netlink_message_append_in_addr_union(m, NHA_GATEWAY, nexthop->family, &nexthop->gw.address);
if (r < 0)
return r;
@ -722,7 +722,7 @@ static bool nexthop_is_ready_to_configure(Link *link, const NextHop *nexthop) {
return r;
}
return gateway_is_ready(link, FLAGS_SET(nexthop->flags, RTNH_F_ONLINK), nexthop->family, &nexthop->gw);
return gateway_is_ready(link, FLAGS_SET(nexthop->flags, RTNH_F_ONLINK), nexthop->family, &nexthop->gw.address);
}
static int nexthop_process_request(Request *req, Link *link, NextHop *nexthop) {
@ -1093,9 +1093,9 @@ int manager_rtnl_process_nexthop(sd_netlink *rtnl, sd_netlink_message *message,
(void) nexthop_update_group(nexthop, message);
if (nexthop->family != AF_UNSPEC) {
r = netlink_message_read_in_addr_union(message, NHA_GATEWAY, nexthop->family, &nexthop->gw);
r = netlink_message_read_in_addr_union(message, NHA_GATEWAY, nexthop->family, &nexthop->gw.address);
if (r == -ENODATA)
nexthop->gw = IN_ADDR_NULL;
nexthop->gw.address = IN_ADDR_NULL;
else if (r < 0)
log_debug_errno(r, "rtnl: could not get NHA_GATEWAY attribute, ignoring: %m");
}
@ -1129,66 +1129,60 @@ int manager_rtnl_process_nexthop(sd_netlink *rtnl, sd_netlink_message *message,
return 1;
}
#define log_nexthop_section(nexthop, fmt, ...) \
({ \
const NextHop *_nexthop = (nexthop); \
log_section_warning_errno( \
_nexthop ? _nexthop->section : NULL, \
SYNTHETIC_ERRNO(EINVAL), \
fmt " Ignoring [NextHop] section.", \
##__VA_ARGS__); \
})
static int nexthop_section_verify(NextHop *nh) {
if (section_is_invalid(nh->section))
return -EINVAL;
if (!nh->network->manager->manage_foreign_nexthops && nh->id == 0)
return log_warning_errno(SYNTHETIC_ERRNO(EINVAL),
"%s: [NextHop] section without specifying Id= is not supported "
"if ManageForeignNextHops=no is set in networkd.conf. "
"Ignoring [NextHop] section from line %u.",
nh->section->filename, nh->section->line);
return log_nexthop_section(nh, "Nexthop without specifying Id= is not supported if ManageForeignNextHops=no is set in networkd.conf.");
if (nh->family == AF_UNSPEC)
nh->family = nh->gw.family;
else if (nh->gw.family != AF_UNSPEC && nh->gw.family != nh->family)
return log_nexthop_section(nh, "Family= and Gateway= settings for nexthop contradict each other.");
assert(nh->gw.family == nh->family || nh->gw.family == AF_UNSPEC);
if (!hashmap_isempty(nh->group)) {
if (in_addr_is_set(nh->family, &nh->gw))
return log_warning_errno(SYNTHETIC_ERRNO(EINVAL),
"%s: nexthop group cannot have gateway address. "
"Ignoring [NextHop] section from line %u.",
nh->section->filename, nh->section->line);
if (in_addr_is_set(nh->family, &nh->gw.address))
return log_nexthop_section(nh, "Nexthop group cannot have gateway address.");
if (nh->family != AF_UNSPEC)
return log_warning_errno(SYNTHETIC_ERRNO(EINVAL),
"%s: nexthop group cannot have Family= setting. "
"Ignoring [NextHop] section from line %u.",
nh->section->filename, nh->section->line);
return log_nexthop_section(nh, "Nexthop group cannot have Family= setting.");
if (nh->blackhole)
return log_warning_errno(SYNTHETIC_ERRNO(EINVAL),
"%s: nexthop group cannot be a blackhole. "
"Ignoring [NextHop] section from line %u.",
nh->section->filename, nh->section->line);
return log_nexthop_section(nh, "Nexthop group cannot be a blackhole.");
if (nh->onlink > 0)
return log_warning_errno(SYNTHETIC_ERRNO(EINVAL),
"%s: nexthop group cannot have on-link flag. "
"Ignoring [NextHop] section from line %u.",
nh->section->filename, nh->section->line);
return log_nexthop_section(nh, "Nexthop group cannot have on-link flag.");
} else if (nh->family == AF_UNSPEC)
/* When neither Family=, Gateway=, nor Group= is specified, assume IPv4. */
nh->family = AF_INET;
if (nh->blackhole) {
if (in_addr_is_set(nh->family, &nh->gw))
return log_warning_errno(SYNTHETIC_ERRNO(EINVAL),
"%s: blackhole nexthop cannot have gateway address. "
"Ignoring [NextHop] section from line %u.",
nh->section->filename, nh->section->line);
if (in_addr_is_set(nh->family, &nh->gw.address))
return log_nexthop_section(nh, "Blackhole nexthop cannot have gateway address.");
if (nh->onlink > 0)
return log_warning_errno(SYNTHETIC_ERRNO(EINVAL),
"%s: blackhole nexthop cannot have on-link flag. "
"Ignoring [NextHop] section from line %u.",
nh->section->filename, nh->section->line);
return log_nexthop_section(nh, "Blackhole nexthop cannot have on-link flag.");
}
if (nh->onlink < 0 && in_addr_is_set(nh->family, &nh->gw) &&
if (nh->onlink < 0 && in_addr_is_set(nh->family, &nh->gw.address) &&
ordered_hashmap_isempty(nh->network->addresses_by_section)) {
/* If no address is configured, in most cases the gateway cannot be reachable.
* TODO: we may need to improve the condition above. */
log_warning("%s: Gateway= without static address configured. "
"Enabling OnLink= option.",
nh->section->filename);
log_section_warning(nh->section, "Nexthop with Gateway= specified, but no static address configured. Enabling OnLink= option.");
nh->onlink = true;
}
@ -1262,7 +1256,7 @@ int manager_build_nexthop_ids(Manager *manager) {
return 0;
}
int config_parse_nexthop_id(
static int config_parse_nexthop_family(
const char *unit,
const char *filename,
unsigned line,
@ -1274,261 +1268,38 @@ int config_parse_nexthop_id(
void *data,
void *userdata) {
_cleanup_(nexthop_unref_or_set_invalidp) NextHop *n = NULL;
Network *network = userdata;
uint32_t id;
int *family = ASSERT_PTR(data);
if (isempty(rvalue))
*family = AF_UNSPEC;
else if (streq(rvalue, "ipv4"))
*family = AF_INET;
else if (streq(rvalue, "ipv6"))
*family = AF_INET6;
else
return log_syntax_parse_error(unit, filename, line, 0, lvalue, rvalue);
return 1;
}
static int config_parse_nexthop_group(
const char *unit,
const char *filename,
unsigned line,
const char *section,
unsigned section_line,
const char *lvalue,
int ltype,
const char *rvalue,
void *data,
void *userdata) {
Hashmap **group = ASSERT_PTR(data);
int r;
assert(filename);
assert(section);
assert(lvalue);
assert(rvalue);
assert(data);
r = nexthop_new_static(network, filename, section_line, &n);
if (r < 0)
return log_oom();
if (isempty(rvalue)) {
n->id = 0;
TAKE_PTR(n);
return 0;
}
r = safe_atou32(rvalue, &id);
if (r < 0) {
log_syntax(unit, LOG_WARNING, filename, line, r,
"Could not parse nexthop id \"%s\", ignoring assignment: %m", rvalue);
return 0;
}
if (id == 0) {
log_syntax(unit, LOG_WARNING, filename, line, 0,
"Invalid nexthop id \"%s\", ignoring assignment: %m", rvalue);
return 0;
}
n->id = id;
TAKE_PTR(n);
return 0;
}
int config_parse_nexthop_gateway(
const char *unit,
const char *filename,
unsigned line,
const char *section,
unsigned section_line,
const char *lvalue,
int ltype,
const char *rvalue,
void *data,
void *userdata) {
_cleanup_(nexthop_unref_or_set_invalidp) NextHop *n = NULL;
Network *network = userdata;
int r;
assert(filename);
assert(section);
assert(lvalue);
assert(rvalue);
assert(data);
r = nexthop_new_static(network, filename, section_line, &n);
if (r < 0)
return log_oom();
if (isempty(rvalue)) {
n->family = AF_UNSPEC;
n->gw = IN_ADDR_NULL;
TAKE_PTR(n);
return 0;
}
r = in_addr_from_string_auto(rvalue, &n->family, &n->gw);
if (r < 0) {
log_syntax(unit, LOG_WARNING, filename, line, r,
"Invalid %s='%s', ignoring assignment: %m", lvalue, rvalue);
return 0;
}
TAKE_PTR(n);
return 0;
}
int config_parse_nexthop_family(
const char *unit,
const char *filename,
unsigned line,
const char *section,
unsigned section_line,
const char *lvalue,
int ltype,
const char *rvalue,
void *data,
void *userdata) {
_cleanup_(nexthop_unref_or_set_invalidp) NextHop *n = NULL;
Network *network = userdata;
AddressFamily a;
int r;
assert(filename);
assert(section);
assert(lvalue);
assert(rvalue);
assert(data);
r = nexthop_new_static(network, filename, section_line, &n);
if (r < 0)
return log_oom();
if (isempty(rvalue) &&
!in_addr_is_set(n->family, &n->gw)) {
/* Accept an empty string only when Gateway= is null or not specified. */
n->family = AF_UNSPEC;
TAKE_PTR(n);
return 0;
}
a = nexthop_address_family_from_string(rvalue);
if (a < 0) {
log_syntax(unit, LOG_WARNING, filename, line, 0,
"Invalid %s='%s', ignoring assignment: %m", lvalue, rvalue);
return 0;
}
if (in_addr_is_set(n->family, &n->gw) &&
((a == ADDRESS_FAMILY_IPV4 && n->family == AF_INET6) ||
(a == ADDRESS_FAMILY_IPV6 && n->family == AF_INET))) {
log_syntax(unit, LOG_WARNING, filename, line, 0,
"Specified family '%s' conflicts with the family of the previously specified Gateway=, "
"ignoring assignment.", rvalue);
return 0;
}
switch (a) {
case ADDRESS_FAMILY_IPV4:
n->family = AF_INET;
break;
case ADDRESS_FAMILY_IPV6:
n->family = AF_INET6;
break;
default:
assert_not_reached();
}
TAKE_PTR(n);
return 0;
}
int config_parse_nexthop_onlink(
const char *unit,
const char *filename,
unsigned line,
const char *section,
unsigned section_line,
const char *lvalue,
int ltype,
const char *rvalue,
void *data,
void *userdata) {
_cleanup_(nexthop_unref_or_set_invalidp) NextHop *n = NULL;
Network *network = userdata;
int r;
assert(filename);
assert(section);
assert(lvalue);
assert(rvalue);
assert(data);
r = nexthop_new_static(network, filename, section_line, &n);
if (r < 0)
return log_oom();
r = parse_tristate(rvalue, &n->onlink);
if (r < 0) {
log_syntax(unit, LOG_WARNING, filename, line, r,
"Failed to parse %s=, ignoring assignment: %s", lvalue, rvalue);
return 0;
}
TAKE_PTR(n);
return 0;
}
int config_parse_nexthop_blackhole(
const char *unit,
const char *filename,
unsigned line,
const char *section,
unsigned section_line,
const char *lvalue,
int ltype,
const char *rvalue,
void *data,
void *userdata) {
_cleanup_(nexthop_unref_or_set_invalidp) NextHop *n = NULL;
Network *network = userdata;
int r;
assert(filename);
assert(section);
assert(lvalue);
assert(rvalue);
assert(data);
r = nexthop_new_static(network, filename, section_line, &n);
if (r < 0)
return log_oom();
r = parse_boolean(rvalue);
if (r < 0) {
log_syntax(unit, LOG_WARNING, filename, line, r,
"Failed to parse %s=, ignoring assignment: %s", lvalue, rvalue);
return 0;
}
n->blackhole = r;
TAKE_PTR(n);
return 0;
}
int config_parse_nexthop_group(
const char *unit,
const char *filename,
unsigned line,
const char *section,
unsigned section_line,
const char *lvalue,
int ltype,
const char *rvalue,
void *data,
void *userdata) {
_cleanup_(nexthop_unref_or_set_invalidp) NextHop *n = NULL;
Network *network = userdata;
int r;
assert(filename);
assert(section);
assert(lvalue);
assert(rvalue);
assert(data);
r = nexthop_new_static(network, filename, section_line, &n);
if (r < 0)
return log_oom();
if (isempty(rvalue)) {
n->group = hashmap_free_free(n->group);
TAKE_PTR(n);
return 0;
*group = hashmap_free_free(*group);
return 1;
}
for (const char *p = rvalue;;) {
@ -1538,15 +1309,10 @@ int config_parse_nexthop_group(
char *sep;
r = extract_first_word(&p, &word, NULL, 0);
if (r == -ENOMEM)
return log_oom();
if (r < 0) {
log_syntax(unit, LOG_WARNING, filename, line, r,
"Invalid %s=, ignoring assignment: %s", lvalue, rvalue);
return 0;
}
if (r < 0)
return log_syntax_parse_error(unit, filename, line, r, lvalue, rvalue);
if (r == 0)
break;
return 1;
nhg = new0(struct nexthop_grp, 1);
if (!nhg)
@ -1586,7 +1352,7 @@ int config_parse_nexthop_group(
continue;
}
r = hashmap_ensure_put(&n->group, NULL, UINT32_TO_PTR(nhg->id), nhg);
r = hashmap_ensure_put(group, NULL, UINT32_TO_PTR(nhg->id), nhg);
if (r == -ENOMEM)
return log_oom();
if (r == -EEXIST) {
@ -1598,7 +1364,44 @@ int config_parse_nexthop_group(
assert(r > 0);
TAKE_PTR(nhg);
}
}
TAKE_PTR(n);
int config_parse_nexthop_section(
const char *unit,
const char *filename,
unsigned line,
const char *section,
unsigned section_line,
const char *lvalue,
int ltype,
const char *rvalue,
void *data,
void *userdata) {
static const ConfigSectionParser table[_NEXTHOP_CONF_PARSER_MAX] = {
[NEXTHOP_ID] = { .parser = config_parse_uint32, .ltype = 0, .offset = offsetof(NextHop, id), },
[NEXTHOP_GATEWAY] = { .parser = config_parse_in_addr_data, .ltype = 0, .offset = offsetof(NextHop, gw), },
[NEXTHOP_FAMILY] = { .parser = config_parse_nexthop_family, .ltype = 0, .offset = offsetof(NextHop, family), },
[NEXTHOP_ONLINK] = { .parser = config_parse_tristate, .ltype = 0, .offset = offsetof(NextHop, onlink), },
[NEXTHOP_BLACKHOLE] = { .parser = config_parse_bool, .ltype = 0, .offset = offsetof(NextHop, blackhole), },
[NEXTHOP_GROUP] = { .parser = config_parse_nexthop_group, .ltype = 0, .offset = offsetof(NextHop, group), },
};
_cleanup_(nexthop_unref_or_set_invalidp) NextHop *nexthop = NULL;
Network *network = ASSERT_PTR(userdata);
int r;
assert(filename);
r = nexthop_new_static(network, filename, section_line, &nexthop);
if (r < 0)
return log_oom();
r = config_section_parse(table, ELEMENTSOF(table),
unit, filename, line, section, section_line, lvalue, ltype, rvalue, nexthop);
if (r <= 0) /* 0 means non-critical error, but the section will be ignored. */
return r;
TAKE_PTR(nexthop);
return 0;
}
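
Taken together with the gperf rows and the NextHopConfParserType enum elsewhere in this diff, the scheme presumably works like this: the gperf table passes the enum value as ltype, config_section_parse() picks table[ltype], and the selected generic parser writes at the given offset into the NextHop object passed as the last argument. Adding a new [NextHop] setting then becomes three one-line changes. A hedged illustration with a hypothetical Description= setting (not part of this change; the description field and the use of config_parse_string for it are assumptions):

/* 1. nexthop.h: add an index to NextHopConfParserType (before _NEXTHOP_CONF_PARSER_MAX). */
NEXTHOP_DESCRIPTION,

/* 2. nexthop.c: add the matching entry to the ConfigSectionParser table above. */
[NEXTHOP_DESCRIPTION] = { .parser = config_parse_string, .ltype = 0, .offset = offsetof(NextHop, description), },

/* 3. .network gperf grammar: route the key to the shared section parser with that index. */
NextHop.Description, config_parse_nexthop_section, NEXTHOP_DESCRIPTION, 0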


@ -36,7 +36,7 @@ typedef struct NextHop {
Hashmap *group; /* NHA_GROUP */
bool blackhole; /* NHA_BLACKHOLE */
int ifindex; /* NHA_OIF */
union in_addr_union gw; /* NHA_GATEWAY */
struct in_addr_data gw; /* NHA_GATEWAY, gw.family is only used by conf parser. */
/* Only used in conf parser and nexthop_section_verify(). */
int onlink;
@ -71,9 +71,15 @@ int manager_build_nexthop_ids(Manager *manager);
DEFINE_NETWORK_CONFIG_STATE_FUNCTIONS(NextHop, nexthop);
CONFIG_PARSER_PROTOTYPE(config_parse_nexthop_id);
CONFIG_PARSER_PROTOTYPE(config_parse_nexthop_gateway);
CONFIG_PARSER_PROTOTYPE(config_parse_nexthop_family);
CONFIG_PARSER_PROTOTYPE(config_parse_nexthop_onlink);
CONFIG_PARSER_PROTOTYPE(config_parse_nexthop_blackhole);
CONFIG_PARSER_PROTOTYPE(config_parse_nexthop_group);
typedef enum NextHopConfParserType {
NEXTHOP_ID,
NEXTHOP_GATEWAY,
NEXTHOP_FAMILY,
NEXTHOP_ONLINK,
NEXTHOP_BLACKHOLE,
NEXTHOP_GROUP,
_NEXTHOP_CONF_PARSER_MAX,
_NEXTHOP_CONF_PARSER_INVALID = -EINVAL,
} NextHopConfParserType;
CONFIG_PARSER_PROTOTYPE(config_parse_nexthop_section);


@ -1742,7 +1742,7 @@ static int oci_seccomp_args(const char *name, sd_json_variant *v, sd_json_dispat
int r;
JSON_VARIANT_ARRAY_FOREACH(e, v) {
static const struct sd_json_dispatch_field table[] = {
static const sd_json_dispatch_field table[] = {
{ "index", SD_JSON_VARIANT_UNSIGNED, sd_json_dispatch_uint32, offsetof(struct scmp_arg_cmp, arg), SD_JSON_MANDATORY },
{ "value", SD_JSON_VARIANT_UNSIGNED, sd_json_dispatch_uint64, offsetof(struct scmp_arg_cmp, datum_a), SD_JSON_MANDATORY },
{ "valueTwo", SD_JSON_VARIANT_UNSIGNED, sd_json_dispatch_uint64, offsetof(struct scmp_arg_cmp, datum_b), 0 },


@ -369,7 +369,7 @@ static int run(int argc, char *argv[]) {
event = TPM2_EVENT_PHASE;
}
if (arg_graceful && tpm2_support() != TPM2_SUPPORT_FULL) {
if (arg_graceful && !tpm2_is_fully_supported()) {
log_notice("No complete TPM2 support detected, exiting gracefully.");
return EXIT_SUCCESS;
}


@ -2876,55 +2876,76 @@ static int print_answer(sd_json_variant *answer) {
return 0;
}
typedef struct MonitorQueryParams {
sd_json_variant *question;
sd_json_variant *answer;
sd_json_variant *collected_questions;
int rcode;
int error;
int ede_code;
const char *state;
const char *result;
const char *ede_msg;
} MonitorQueryParams;
static void monitor_query_params_done(MonitorQueryParams *p) {
assert(p);
sd_json_variant_unref(p->question);
sd_json_variant_unref(p->answer);
sd_json_variant_unref(p->collected_questions);
}
static void monitor_query_dump(sd_json_variant *v) {
_cleanup_(sd_json_variant_unrefp) sd_json_variant *question = NULL, *answer = NULL, *collected_questions = NULL;
int rcode = -1, error = 0, ede_code = -1;
const char *state = NULL, *result = NULL, *ede_msg = NULL;
assert(v);
sd_json_dispatch_field dispatch_table[] = {
{ "question", SD_JSON_VARIANT_ARRAY, sd_json_dispatch_variant, PTR_TO_SIZE(&question), SD_JSON_MANDATORY },
{ "answer", SD_JSON_VARIANT_ARRAY, sd_json_dispatch_variant, PTR_TO_SIZE(&answer), 0 },
{ "collectedQuestions", SD_JSON_VARIANT_ARRAY, sd_json_dispatch_variant, PTR_TO_SIZE(&collected_questions), 0 },
{ "state", SD_JSON_VARIANT_STRING, sd_json_dispatch_const_string, PTR_TO_SIZE(&state), SD_JSON_MANDATORY },
{ "result", SD_JSON_VARIANT_STRING, sd_json_dispatch_const_string, PTR_TO_SIZE(&result), 0 },
{ "rcode", _SD_JSON_VARIANT_TYPE_INVALID, sd_json_dispatch_int, PTR_TO_SIZE(&rcode), 0 },
{ "errno", _SD_JSON_VARIANT_TYPE_INVALID, sd_json_dispatch_int, PTR_TO_SIZE(&error), 0 },
{ "extendedDNSErrorCode", _SD_JSON_VARIANT_TYPE_INVALID, sd_json_dispatch_int, PTR_TO_SIZE(&ede_code), 0 },
{ "extendedDNSErrorMessage", SD_JSON_VARIANT_STRING, sd_json_dispatch_const_string, PTR_TO_SIZE(&ede_msg), 0 },
static const sd_json_dispatch_field dispatch_table[] = {
{ "question", SD_JSON_VARIANT_ARRAY, sd_json_dispatch_variant, offsetof(MonitorQueryParams, question), SD_JSON_MANDATORY },
{ "answer", SD_JSON_VARIANT_ARRAY, sd_json_dispatch_variant, offsetof(MonitorQueryParams, answer), 0 },
{ "collectedQuestions", SD_JSON_VARIANT_ARRAY, sd_json_dispatch_variant, offsetof(MonitorQueryParams, collected_questions), 0 },
{ "state", SD_JSON_VARIANT_STRING, sd_json_dispatch_const_string, offsetof(MonitorQueryParams, state), SD_JSON_MANDATORY },
{ "result", SD_JSON_VARIANT_STRING, sd_json_dispatch_const_string, offsetof(MonitorQueryParams, result), 0 },
{ "rcode", _SD_JSON_VARIANT_TYPE_INVALID, sd_json_dispatch_int, offsetof(MonitorQueryParams, rcode), 0 },
{ "errno", _SD_JSON_VARIANT_TYPE_INVALID, sd_json_dispatch_int, offsetof(MonitorQueryParams, error), 0 },
{ "extendedDNSErrorCode", _SD_JSON_VARIANT_TYPE_INVALID, sd_json_dispatch_int, offsetof(MonitorQueryParams, ede_code), 0 },
{ "extendedDNSErrorMessage", SD_JSON_VARIANT_STRING, sd_json_dispatch_const_string, offsetof(MonitorQueryParams, ede_msg), 0 },
{}
};
if (sd_json_dispatch(v, dispatch_table, SD_JSON_LOG|SD_JSON_ALLOW_EXTENSIONS, NULL) < 0)
_cleanup_(monitor_query_params_done) MonitorQueryParams p = {
.rcode = -1,
.ede_code = -1,
};
assert(v);
if (sd_json_dispatch(v, dispatch_table, SD_JSON_LOG|SD_JSON_ALLOW_EXTENSIONS, &p) < 0)
return;
/* First show the current question */
print_question('Q', ansi_highlight_cyan(), question);
print_question('Q', ansi_highlight_cyan(), p.question);
/* And then show the questions that led to this one in case this was a CNAME chain */
print_question('C', ansi_highlight_grey(), collected_questions);
print_question('C', ansi_highlight_grey(), p.collected_questions);
printf("%s%s S%s: %s",
streq_ptr(state, "success") ? ansi_highlight_green() : ansi_highlight_red(),
streq_ptr(p.state, "success") ? ansi_highlight_green() : ansi_highlight_red(),
special_glyph(SPECIAL_GLYPH_ARROW_LEFT),
ansi_normal(),
strna(streq_ptr(state, "errno") ? errno_to_name(error) :
streq_ptr(state, "rcode-failure") ? dns_rcode_to_string(rcode) :
state));
strna(streq_ptr(p.state, "errno") ? errno_to_name(p.error) :
streq_ptr(p.state, "rcode-failure") ? dns_rcode_to_string(p.rcode) :
p.state));
if (!isempty(result))
printf(": %s", result);
if (!isempty(p.result))
printf(": %s", p.result);
if (ede_code >= 0)
if (p.ede_code >= 0)
printf(" (%s%s%s)",
FORMAT_DNS_EDE_RCODE(ede_code),
!isempty(ede_msg) ? ": " : "",
strempty(ede_msg));
FORMAT_DNS_EDE_RCODE(p.ede_code),
!isempty(p.ede_msg) ? ": " : "",
strempty(p.ede_msg));
puts("");
print_answer(answer);
print_answer(p.answer);
}
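
The reason the dispatch table can now be static const: the old PTR_TO_SIZE(&local) form baked the addresses of on-stack locals into the table entries, so the table could not outlive a single call; with all targets gathered into one struct, the entries only carry offsetof() constants and the struct's address travels through the userdata argument instead. A minimal sketch of the pattern, with illustrative names that are not part of this change:

typedef struct ExampleParams {
        const char *state;
        int rcode;
        sd_json_variant *answer;
} ExampleParams;

static void example_params_done(ExampleParams *p) {
        assert(p);
        sd_json_variant_unref(p->answer);
}

static int example_dispatch(sd_json_variant *v) {
        static const sd_json_dispatch_field table[] = {
                { "state",  SD_JSON_VARIANT_STRING,        sd_json_dispatch_const_string, offsetof(ExampleParams, state),  SD_JSON_MANDATORY },
                { "rcode",  _SD_JSON_VARIANT_TYPE_INVALID, sd_json_dispatch_int,          offsetof(ExampleParams, rcode),  0                 },
                { "answer", SD_JSON_VARIANT_ARRAY,         sd_json_dispatch_variant,      offsetof(ExampleParams, answer), 0                 },
                {}
        };

        /* Pair the params struct with a done() helper so _cleanup_ releases any
         * reference the dispatcher stored. */
        _cleanup_(example_params_done) ExampleParams p = { .rcode = -1 };

        return sd_json_dispatch(v, table, SD_JSON_LOG|SD_JSON_ALLOW_EXTENSIONS, &p);
}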
static int monitor_reply(


@ -2145,26 +2145,31 @@ int dns_resource_key_to_json(DnsResourceKey *key, sd_json_variant **ret) {
}
int dns_resource_key_from_json(sd_json_variant *v, DnsResourceKey **ret) {
_cleanup_(dns_resource_key_unrefp) DnsResourceKey *key = NULL;
uint16_t type = 0, class = 0;
const char *name = NULL;
int r;
struct params {
uint16_t type;
uint16_t class;
const char *name;
};
sd_json_dispatch_field dispatch_table[] = {
{ "class", _SD_JSON_VARIANT_TYPE_INVALID, sd_json_dispatch_uint16, PTR_TO_SIZE(&class), SD_JSON_MANDATORY },
{ "type", _SD_JSON_VARIANT_TYPE_INVALID, sd_json_dispatch_uint16, PTR_TO_SIZE(&type), SD_JSON_MANDATORY },
{ "name", SD_JSON_VARIANT_STRING, sd_json_dispatch_const_string, PTR_TO_SIZE(&name), SD_JSON_MANDATORY },
static const sd_json_dispatch_field dispatch_table[] = {
{ "class", _SD_JSON_VARIANT_TYPE_INVALID, sd_json_dispatch_uint16, offsetof(struct params, class), SD_JSON_MANDATORY },
{ "type", _SD_JSON_VARIANT_TYPE_INVALID, sd_json_dispatch_uint16, offsetof(struct params, type), SD_JSON_MANDATORY },
{ "name", SD_JSON_VARIANT_STRING, sd_json_dispatch_const_string, offsetof(struct params, name), SD_JSON_MANDATORY },
{}
};
_cleanup_(dns_resource_key_unrefp) DnsResourceKey *key = NULL;
struct params p;
int r;
assert(v);
assert(ret);
r = sd_json_dispatch(v, dispatch_table, 0, NULL);
r = sd_json_dispatch(v, dispatch_table, 0, &p);
if (r < 0)
return r;
key = dns_resource_key_new(class, type, name);
key = dns_resource_key_new(p.class, p.type, p.name);
if (!key)
return -ENOMEM;


@ -667,7 +667,7 @@ static int has_tpm2(void) {
*
* Note that we don't check if we ourselves are built with TPM2 support here! */
return FLAGS_SET(tpm2_support(), TPM2_SUPPORT_SUBSYSTEM|TPM2_SUPPORT_FIRMWARE);
return FLAGS_SET(tpm2_support_full(TPM2_SUPPORT_SUBSYSTEM|TPM2_SUPPORT_FIRMWARE), TPM2_SUPPORT_SUBSYSTEM|TPM2_SUPPORT_FIRMWARE);
}
static int condition_test_security(Condition *c, char **env) {


@ -1764,7 +1764,7 @@ int config_parse_hw_addr(
void *data,
void *userdata) {
struct hw_addr_data a, *hwaddr = ASSERT_PTR(data);
struct hw_addr_data *hwaddr = ASSERT_PTR(data);
int r;
assert(filename);
@ -1776,11 +1776,10 @@ int config_parse_hw_addr(
return 1;
}
r = parse_hw_addr_full(rvalue, ltype, &a);
r = parse_hw_addr_full(rvalue, ltype, hwaddr);
if (r < 0)
return log_syntax_parse_error(unit, filename, line, r, lvalue, rvalue);
*hwaddr = a;
return 1;
}
@ -1973,6 +1972,36 @@ int config_parse_in_addr_non_null(
return 1;
}
int config_parse_in_addr_data(
const char *unit,
const char *filename,
unsigned line,
const char *section,
unsigned section_line,
const char *lvalue,
int ltype,
const char *rvalue,
void *data,
void *userdata) {
struct in_addr_data *p = ASSERT_PTR(data);
int r;
assert(filename);
assert(lvalue);
if (isempty(rvalue)) {
*p = (struct in_addr_data) {};
return 1;
}
r = in_addr_from_string_auto(rvalue, &p->family, &p->address);
if (r < 0)
return log_syntax_parse_error(unit, filename, line, r, lvalue, rvalue);
return 1;
}
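
A usage sketch for the new helper, in the style of the assertion macros used elsewhere in this changeset; the unit/file/section names are illustrative, and the return value of 1 plus the empty-string reset follow from the code above:

struct in_addr_data d = {};

/* "Gateway=2001:db8::1": family and address are filled in together. */
ASSERT_OK_POSITIVE(config_parse_in_addr_data("unit", "test.network", 1, "NextHop", 1,
                                             "Gateway", 0, "2001:db8::1", &d, NULL));
ASSERT_EQ(d.family, AF_INET6);

/* An empty assignment resets the whole struct. */
ASSERT_OK_POSITIVE(config_parse_in_addr_data("unit", "test.network", 1, "NextHop", 1,
                                             "Gateway", 0, "", &d, NULL));
ASSERT_EQ(d.family, AF_UNSPEC);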
int config_parse_unsigned_bounded(
const char *unit,
const char *filename,


@ -304,6 +304,7 @@ CONFIG_PARSER_PROTOTYPE(config_parse_hw_addrs);
CONFIG_PARSER_PROTOTYPE(config_parse_ether_addr);
CONFIG_PARSER_PROTOTYPE(config_parse_ether_addrs);
CONFIG_PARSER_PROTOTYPE(config_parse_in_addr_non_null);
CONFIG_PARSER_PROTOTYPE(config_parse_in_addr_data);
CONFIG_PARSER_PROTOTYPE(config_parse_percent);
CONFIG_PARSER_PROTOTYPE(config_parse_permyriad);
CONFIG_PARSER_PROTOTYPE(config_parse_pid);


@ -886,7 +886,7 @@ int encrypt_credential_and_warn(
* container tpm2_support will detect this, and will return a different flag combination of
* TPM2_SUPPORT_FULL, effectively skipping the use of TPM2 when inside one. */
try_tpm2 = tpm2_support() == TPM2_SUPPORT_FULL;
try_tpm2 = tpm2_is_fully_supported();
if (!try_tpm2)
log_debug("System lacks TPM2 support or running in a container, not attempting to use TPM2.");
} else
@ -1582,14 +1582,12 @@ int ipc_encrypt_credential(const char *name, usec_t timestamp, usec_t not_after,
return log_error_errno(sd_varlink_error_to_errno(error_id, reply), "Failed to encrypt: %s", error_id);
}
r = sd_json_dispatch(
reply,
(const sd_json_dispatch_field[]) {
{ "blob", SD_JSON_VARIANT_STRING, json_dispatch_unbase64_iovec, PTR_TO_SIZE(ret), SD_JSON_MANDATORY },
{},
},
SD_JSON_LOG|SD_JSON_ALLOW_EXTENSIONS,
/* userdata= */ NULL);
static const sd_json_dispatch_field dispatch_table[] = {
{ "blob", SD_JSON_VARIANT_STRING, json_dispatch_unbase64_iovec, 0, SD_JSON_MANDATORY },
{},
};
r = sd_json_dispatch(reply, dispatch_table, SD_JSON_LOG|SD_JSON_ALLOW_EXTENSIONS, ret);
if (r < 0)
return r;
@ -1649,14 +1647,12 @@ int ipc_decrypt_credential(const char *validate_name, usec_t validate_timestamp,
return log_error_errno(sd_varlink_error_to_errno(error_id, reply), "Failed to decrypt: %s", error_id);
}
r = sd_json_dispatch(
reply,
(const sd_json_dispatch_field[]) {
{ "data", SD_JSON_VARIANT_STRING, json_dispatch_unbase64_iovec, PTR_TO_SIZE(ret), SD_JSON_MANDATORY },
{},
},
SD_JSON_LOG|SD_JSON_ALLOW_EXTENSIONS,
/* userdata= */ NULL);
static const sd_json_dispatch_field dispatch_table[] = {
{ "data", SD_JSON_VARIANT_STRING, json_dispatch_unbase64_iovec, 0, SD_JSON_MANDATORY },
{},
};
r = sd_json_dispatch(reply, dispatch_table, SD_JSON_LOG|SD_JSON_ALLOW_EXTENSIONS, ret);
if (r < 0)
return r;


@ -250,6 +250,18 @@ int nsresource_add_cgroup(int userns_fd, int cgroup_fd) {
return 1;
}
typedef struct InterfaceParams {
char *host_interface_name;
char *namespace_interface_name;
} InterfaceParams;
static void interface_params_done(InterfaceParams *p) {
assert(p);
free(p->host_interface_name);
free(p->namespace_interface_name);
}
int nsresource_add_netif(
int userns_fd,
int netns_fd,
@ -313,20 +325,20 @@ int nsresource_add_netif(
if (error_id)
return log_debug_errno(sd_varlink_error_to_errno(error_id, reply), "Failed to add network to user namespace: %s", error_id);
_cleanup_free_ char *host_interface_name = NULL, *namespace_interface_name = NULL;
r = sd_json_dispatch(
reply,
(const sd_json_dispatch_field[]) {
{ "hostInterfaceName", SD_JSON_VARIANT_STRING, sd_json_dispatch_string, PTR_TO_SIZE(&host_interface_name) },
{ "namespaceInterfaceName", SD_JSON_VARIANT_STRING, sd_json_dispatch_string, PTR_TO_SIZE(&namespace_interface_name) },
},
SD_JSON_ALLOW_EXTENSIONS,
/* userdata= */ NULL);
static const sd_json_dispatch_field dispatch_table[] = {
{ "hostInterfaceName", SD_JSON_VARIANT_STRING, sd_json_dispatch_string, offsetof(InterfaceParams, host_interface_name), 0 },
{ "namespaceInterfaceName", SD_JSON_VARIANT_STRING, sd_json_dispatch_string, offsetof(InterfaceParams, namespace_interface_name), 0 },
};
_cleanup_(interface_params_done) InterfaceParams p = {};
r = sd_json_dispatch(reply, dispatch_table, SD_JSON_ALLOW_EXTENSIONS, &p);
if (r < 0)
return r;
if (ret_host_ifname)
*ret_host_ifname = TAKE_PTR(host_interface_name);
*ret_host_ifname = TAKE_PTR(p.host_interface_name);
if (ret_namespace_ifname)
*ret_namespace_ifname = TAKE_PTR(namespace_interface_name);
*ret_namespace_ifname = TAKE_PTR(p.namespace_interface_name);
return 1;
}


@ -281,6 +281,44 @@ static inline int run_test_table(void) {
} \
})
#define ASSERT_OK_ZERO_ERRNO(expr) \
({ \
typeof(expr) _result = (expr); \
if (_result < 0) { \
log_error_errno(errno, "%s:%i: Assertion failed: expected \"%s\" to succeed but got the following error: %m", \
PROJECT_FILE, __LINE__, #expr); \
abort(); \
} \
if (_result != 0) { \
char _sexpr[DECIMAL_STR_MAX(typeof(expr))]; \
xsprintf(_sexpr, DECIMAL_STR_FMT(_result), _result); \
log_error("%s:%i: Assertion failed: expected \"%s\" to be zero, but it is %s.", \
PROJECT_FILE, __LINE__, #expr, _sexpr); \
abort(); \
} \
})
#define ASSERT_OK_EQ_ERRNO(expr1, expr2) \
({ \
typeof(expr1) _expr1 = (expr1); \
typeof(expr2) _expr2 = (expr2); \
if (_expr1 < 0) { \
log_error_errno(errno, "%s:%i: Assertion failed: expected \"%s\" to succeed but got the following error: %m", \
PROJECT_FILE, __LINE__, #expr1); \
abort(); \
} \
if (_expr1 != _expr2) { \
char _sexpr1[DECIMAL_STR_MAX(typeof(expr1))]; \
char _sexpr2[DECIMAL_STR_MAX(typeof(expr2))]; \
xsprintf(_sexpr1, DECIMAL_STR_FMT(_expr1), _expr1); \
xsprintf(_sexpr2, DECIMAL_STR_FMT(_expr2), _expr2); \
log_error("%s:%i: Assertion failed: expected \"%s == %s\", but %s != %s", \
PROJECT_FILE, __LINE__, #expr1, #expr2, _sexpr1, _sexpr2); \
abort(); \
} \
})
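
An illustrative usage sketch for the two new macros, mirroring how the test-process-util changes later in this diff use them; the _ERRNO variants are meant for classic errno-style calls, where a negative return means "consult errno" rather than carrying the error itself (assumes the usual fcntl/unistd and fd-util.h includes of a test file):

int fd = open("/dev/null", O_RDWR|O_CLOEXEC);
ASSERT_OK_ERRNO(fd);                                   /* merely "did not fail" */
ASSERT_OK_ZERO_ERRNO(lseek(fd, 0, SEEK_SET));          /* succeeded and returned exactly 0 */
ASSERT_OK_EQ_ERRNO(write(fd, "x", 1), (ssize_t) 1);    /* succeeded and returned the expected value */
fd = safe_close(fd);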
#define ASSERT_FAIL(expr) \
({ \
typeof(expr) _result = (expr); \


@ -3,6 +3,7 @@
#include <sys/file.h>
#include "alloc-util.h"
#include "ansi-color.h"
#include "constants.h"
#include "creds-util.h"
#include "cryptsetup-util.h"
@ -7872,11 +7873,11 @@ int tpm2_sym_mode_from_string(const char *mode) {
return log_debug_errno(SYNTHETIC_ERRNO(EINVAL), "Unknown symmetric mode name '%s'", mode);
}
Tpm2Support tpm2_support(void) {
Tpm2Support tpm2_support_full(Tpm2Support mask) {
Tpm2Support support = TPM2_SUPPORT_NONE;
int r;
if (detect_container() <= 0) {
if (((mask & (TPM2_SUPPORT_SUBSYSTEM|TPM2_SUPPORT_DRIVER)) != 0) && detect_container() <= 0) {
/* Check if there's a /dev/tpmrm* device via sysfs. If we run in a container we likely just
* got the host sysfs mounted. Since devices are generally not virtualized for containers,
* let's assume containers never have a TPM, at least for now. */
@ -7893,18 +7894,24 @@ Tpm2Support tpm2_support(void) {
support |= TPM2_SUPPORT_SUBSYSTEM;
}
if (efi_has_tpm2())
if (FLAGS_SET(mask, TPM2_SUPPORT_FIRMWARE) && efi_has_tpm2())
support |= TPM2_SUPPORT_FIRMWARE;
#if HAVE_TPM2
support |= TPM2_SUPPORT_SYSTEM;
r = dlopen_tpm2();
if (r >= 0)
support |= TPM2_SUPPORT_LIBRARIES;
if (FLAGS_SET(mask, TPM2_SUPPORT_LIBRARIES)) {
r = dlopen_tpm2();
if (r >= 0)
support |= TPM2_SUPPORT_LIBRARIES;
}
#endif
return support;
return support & mask;
}
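
A sketch of the intended use of the new mask parameter (illustrative caller, not part of this change): restricting the mask to the bits the caller cares about lets tpm2_support_full() skip probing the rest, in particular the dlopen() of the libtss2 libraries when TPM2_SUPPORT_LIBRARIES is not requested, which is the bootctl/has_tpm2 case mentioned in the commit messages.

static bool tpm2_present_and_used_by_firmware(void) {
        Tpm2Support mask = TPM2_SUPPORT_SUBSYSTEM|TPM2_SUPPORT_FIRMWARE;

        /* FLAGS_SET(v, m) is ((v & m) == m): require every requested bit to be present. */
        return FLAGS_SET(tpm2_support_full(mask), mask);
}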
static void print_field(const char *s, bool supported) {
printf("%s%s%s%s\n", supported ? ansi_green() : ansi_red(), plus_minus(supported), s, ansi_normal());
}
int verb_has_tpm2_generic(bool quiet) {
@ -7914,22 +7921,17 @@ int verb_has_tpm2_generic(bool quiet) {
if (!quiet) {
if (s == TPM2_SUPPORT_FULL)
puts("yes");
printf("%syes%s\n", ansi_green(), ansi_normal());
else if (s == TPM2_SUPPORT_NONE)
puts("no");
printf("%sno%s\n", ansi_red(), ansi_normal());
else
puts("partial");
printf("%spartial%s\n", ansi_yellow(), ansi_normal());
printf("%sfirmware\n"
"%sdriver\n"
"%ssystem\n"
"%ssubsystem\n"
"%slibraries\n",
plus_minus(s & TPM2_SUPPORT_FIRMWARE),
plus_minus(s & TPM2_SUPPORT_DRIVER),
plus_minus(s & TPM2_SUPPORT_SYSTEM),
plus_minus(s & TPM2_SUPPORT_SUBSYSTEM),
plus_minus(s & TPM2_SUPPORT_LIBRARIES));
print_field("firmware", FLAGS_SET(s, TPM2_SUPPORT_FIRMWARE));
print_field("driver", FLAGS_SET(s, TPM2_SUPPORT_DRIVER));
print_field("system", FLAGS_SET(s, TPM2_SUPPORT_SYSTEM));
print_field("subsystem", FLAGS_SET(s, TPM2_SUPPORT_SUBSYSTEM));
print_field("libraries", FLAGS_SET(s, TPM2_SUPPORT_LIBRARIES));
}
/* Return inverted bit flags. So that TPM2_SUPPORT_FULL becomes EXIT_SUCCESS and the other values


@ -450,8 +450,8 @@ typedef struct {
} systemd_tpm2_plugin_params;
typedef enum Tpm2Support {
/* NOTE! The systemd-creds tool returns these flags 1:1 as exit status. Hence these flags are pretty
* much ABI! Hence, be extra careful when changing/extending these definitions. */
/* NOTE! The systemd-analyze has-tpm2 command returns these flags 1:1 as exit status. Hence these
* flags are pretty much ABI! Hence, be extra careful when changing/extending these definitions. */
TPM2_SUPPORT_NONE = 0, /* no support */
TPM2_SUPPORT_FIRMWARE = 1 << 0, /* firmware reports TPM2 was used */
TPM2_SUPPORT_DRIVER = 1 << 1, /* the kernel has a driver loaded for it */
@ -461,7 +461,13 @@ typedef enum Tpm2Support {
TPM2_SUPPORT_FULL = TPM2_SUPPORT_FIRMWARE|TPM2_SUPPORT_DRIVER|TPM2_SUPPORT_SYSTEM|TPM2_SUPPORT_SUBSYSTEM|TPM2_SUPPORT_LIBRARIES,
} Tpm2Support;
Tpm2Support tpm2_support(void);
Tpm2Support tpm2_support_full(Tpm2Support mask);
static inline Tpm2Support tpm2_support(void) {
return tpm2_support_full(TPM2_SUPPORT_FULL);
}
static inline bool tpm2_is_fully_supported(void) {
return tpm2_support() == TPM2_SUPPORT_FULL;
}
int verb_has_tpm2_generic(bool quiet);


@ -161,12 +161,12 @@ static int process_machine(const char *machine, const char *port) {
uint32_t cid = VMADDR_CID_ANY;
const sd_json_dispatch_field dispatch_table[] = {
{ "vSockCid", SD_JSON_VARIANT_UNSIGNED, sd_json_dispatch_uint32, PTR_TO_SIZE(&cid), 0 },
static const sd_json_dispatch_field dispatch_table[] = {
{ "vSockCid", SD_JSON_VARIANT_UNSIGNED, sd_json_dispatch_uint32, 0, 0 },
{}
};
r = sd_json_dispatch(result, dispatch_table, SD_JSON_ALLOW_EXTENSIONS, NULL);
r = sd_json_dispatch(result, dispatch_table, SD_JSON_ALLOW_EXTENSIONS, &cid);
if (r < 0)
return log_error_errno(r, "Failed to parse Varlink reply: %m");


@ -321,12 +321,27 @@ static int list_targets(sd_bus *bus) {
return table_print_with_pager(table, SD_JSON_FORMAT_OFF, arg_pager_flags, arg_legend);
}
typedef struct DescribeParams {
Version v;
sd_json_variant *contents_json;
bool newest;
bool available;
bool installed;
bool obsolete;
bool protected;
bool incomplete;
} DescribeParams;
static void describe_params_done(DescribeParams *p) {
assert(p);
version_done(&p->v);
sd_json_variant_unref(p->contents_json);
}
static int parse_describe(sd_bus_message *reply, Version *ret) {
Version v = {};
char *version_json = NULL;
_cleanup_(sd_json_variant_unrefp) sd_json_variant *json = NULL, *contents_json = NULL;
bool newest = false, available = false, installed = false, obsolete = false, protected = false,
incomplete = false;
_cleanup_(sd_json_variant_unrefp) sd_json_variant *json = NULL;
int r;
assert(reply);
@ -342,36 +357,37 @@ static int parse_describe(sd_bus_message *reply, Version *ret) {
assert(sd_json_variant_is_object(json));
r = sd_json_dispatch(json,
(const sd_json_dispatch_field[]) {
{ "version", SD_JSON_VARIANT_STRING, sd_json_dispatch_string, PTR_TO_SIZE(&v.version), 0 },
{ "newest", SD_JSON_VARIANT_BOOLEAN, sd_json_dispatch_stdbool, PTR_TO_SIZE(&newest), 0 },
{ "available", SD_JSON_VARIANT_BOOLEAN, sd_json_dispatch_stdbool, PTR_TO_SIZE(&available), 0 },
{ "installed", SD_JSON_VARIANT_BOOLEAN, sd_json_dispatch_stdbool, PTR_TO_SIZE(&installed), 0 },
{ "obsolete", SD_JSON_VARIANT_BOOLEAN, sd_json_dispatch_stdbool, PTR_TO_SIZE(&obsolete), 0 },
{ "protected", SD_JSON_VARIANT_BOOLEAN, sd_json_dispatch_stdbool, PTR_TO_SIZE(&protected), 0 },
{ "incomplete", SD_JSON_VARIANT_BOOLEAN, sd_json_dispatch_stdbool, PTR_TO_SIZE(&incomplete), 0 },
{ "changelog_urls", SD_JSON_VARIANT_ARRAY, sd_json_dispatch_strv, PTR_TO_SIZE(&v.changelog), 0 },
{ "contents", SD_JSON_VARIANT_ARRAY, sd_json_dispatch_variant, PTR_TO_SIZE(&contents_json), 0 },
{},
},
SD_JSON_ALLOW_EXTENSIONS,
/* userdata= */ NULL);
static const sd_json_dispatch_field dispatch_table[] = {
{ "version", SD_JSON_VARIANT_STRING, sd_json_dispatch_string, offsetof(DescribeParams, v.version), 0 },
{ "newest", SD_JSON_VARIANT_BOOLEAN, sd_json_dispatch_stdbool, offsetof(DescribeParams, newest), 0 },
{ "available", SD_JSON_VARIANT_BOOLEAN, sd_json_dispatch_stdbool, offsetof(DescribeParams, available), 0 },
{ "installed", SD_JSON_VARIANT_BOOLEAN, sd_json_dispatch_stdbool, offsetof(DescribeParams, installed), 0 },
{ "obsolete", SD_JSON_VARIANT_BOOLEAN, sd_json_dispatch_stdbool, offsetof(DescribeParams, obsolete), 0 },
{ "protected", SD_JSON_VARIANT_BOOLEAN, sd_json_dispatch_stdbool, offsetof(DescribeParams, protected), 0 },
{ "incomplete", SD_JSON_VARIANT_BOOLEAN, sd_json_dispatch_stdbool, offsetof(DescribeParams, incomplete), 0 },
{ "changelog_urls", SD_JSON_VARIANT_ARRAY, sd_json_dispatch_strv, offsetof(DescribeParams, v.changelog), 0 },
{ "contents", SD_JSON_VARIANT_ARRAY, sd_json_dispatch_variant, offsetof(DescribeParams, contents_json), 0 },
{},
};
_cleanup_(describe_params_done) DescribeParams p = {};
r = sd_json_dispatch(json, dispatch_table, SD_JSON_ALLOW_EXTENSIONS, &p);
if (r < 0)
return log_error_errno(r, "Failed to parse JSON: %m");
SET_FLAG(v.flags, UPDATE_NEWEST, newest);
SET_FLAG(v.flags, UPDATE_AVAILABLE, available);
SET_FLAG(v.flags, UPDATE_INSTALLED, installed);
SET_FLAG(v.flags, UPDATE_OBSOLETE, obsolete);
SET_FLAG(v.flags, UPDATE_PROTECTED, protected);
SET_FLAG(v.flags, UPDATE_INCOMPLETE, incomplete);
SET_FLAG(p.v.flags, UPDATE_NEWEST, p.newest);
SET_FLAG(p.v.flags, UPDATE_AVAILABLE, p.available);
SET_FLAG(p.v.flags, UPDATE_INSTALLED, p.installed);
SET_FLAG(p.v.flags, UPDATE_OBSOLETE, p.obsolete);
SET_FLAG(p.v.flags, UPDATE_PROTECTED, p.protected);
SET_FLAG(p.v.flags, UPDATE_INCOMPLETE, p.incomplete);
r = sd_json_variant_format(contents_json, 0, &v.contents_json);
r = sd_json_variant_format(p.contents_json, 0, &p.v.contents_json);
if (r < 0)
return log_error_errno(r, "Failed to format JSON for contents: %m");
*ret = TAKE_STRUCT(v);
*ret = TAKE_STRUCT(p.v);
return 0;
}


@ -1141,6 +1141,18 @@ TEST(ASSERT) {
ASSERT_SIGNAL(ASSERT_OK_ERRNO(-1), SIGABRT);
ASSERT_SIGNAL(ASSERT_OK_ERRNO(-ENOANO), SIGABRT);
ASSERT_OK_ZERO_ERRNO(0);
ASSERT_SIGNAL(ASSERT_OK_ZERO_ERRNO(1), SIGABRT);
ASSERT_SIGNAL(ASSERT_OK_ZERO_ERRNO(255), SIGABRT);
ASSERT_SIGNAL(ASSERT_OK_ZERO_ERRNO(-1), SIGABRT);
ASSERT_SIGNAL(ASSERT_OK_ZERO_ERRNO(-ENOANO), SIGABRT);
ASSERT_OK_EQ_ERRNO(0, 0);
ASSERT_SIGNAL(ASSERT_OK_EQ_ERRNO(1, 0), SIGABRT);
ASSERT_SIGNAL(ASSERT_OK_EQ_ERRNO(255, 5), SIGABRT);
ASSERT_SIGNAL(ASSERT_OK_EQ_ERRNO(-1, 0), SIGABRT);
ASSERT_SIGNAL(ASSERT_OK_EQ_ERRNO(-ENOANO, 0), SIGABRT);
ASSERT_FAIL(-ENOENT);
ASSERT_FAIL(-EPERM);
ASSERT_SIGNAL(ASSERT_FAIL(0), SIGABRT);


@ -54,46 +54,51 @@ static void test_pid_get_comm_one(pid_t pid) {
xsprintf(path, "/proc/"PID_FMT"/comm", pid);
if (stat(path, &st) == 0) {
assert_se(pid_get_comm(pid, &a) >= 0);
ASSERT_OK(pid_get_comm(pid, &a));
log_info("PID"PID_FMT" comm: '%s'", pid, a);
} else
log_warning("%s not exist.", path);
assert_se(pid_get_cmdline(pid, 0, PROCESS_CMDLINE_COMM_FALLBACK, &c) >= 0);
ASSERT_OK(pid_get_cmdline(pid, 0, PROCESS_CMDLINE_COMM_FALLBACK, &c));
log_info("PID"PID_FMT" cmdline: '%s'", pid, c);
assert_se(pid_get_cmdline(pid, 8, 0, &d) >= 0);
ASSERT_OK(pid_get_cmdline(pid, 8, 0, &d));
log_info("PID"PID_FMT" cmdline truncated to 8: '%s'", pid, d);
free(d);
assert_se(pid_get_cmdline(pid, 1, 0, &d) >= 0);
ASSERT_OK(pid_get_cmdline(pid, 1, 0, &d));
log_info("PID"PID_FMT" cmdline truncated to 1: '%s'", pid, d);
r = get_process_ppid(pid, &e);
assert_se(pid == 1 ? r == -EADDRNOTAVAIL : r >= 0);
if (pid == 1)
ASSERT_ERROR(r, EADDRNOTAVAIL);
else
ASSERT_OK(r);
if (r >= 0) {
log_info("PID"PID_FMT" PPID: "PID_FMT, pid, e);
assert_se(e > 0);
ASSERT_GT(e, 0);
}
assert_se(pid_is_kernel_thread(pid) == 0 || pid != 1);
ASSERT_TRUE(pid_is_kernel_thread(pid) == 0 || pid != 1);
r = get_process_exe(pid, &f);
assert_se(r >= 0 || r == -EACCES);
if (r != -EACCES)
ASSERT_OK(r);
log_info("PID"PID_FMT" exe: '%s'", pid, strna(f));
assert_se(pid_get_uid(pid, &u) == 0);
ASSERT_OK_ZERO(pid_get_uid(pid, &u));
log_info("PID"PID_FMT" UID: "UID_FMT, pid, u);
assert_se(get_process_gid(pid, &g) == 0);
ASSERT_OK_ZERO(get_process_gid(pid, &g));
log_info("PID"PID_FMT" GID: "GID_FMT, pid, g);
r = get_process_environ(pid, &env);
assert_se(r >= 0 || r == -EACCES);
if (r != -EACCES)
ASSERT_OK(r);
log_info("PID"PID_FMT" strlen(environ): %zi", pid, env ? (ssize_t)strlen(env) : (ssize_t)-errno);
if (!detect_container())
assert_se(get_ctty_devnr(pid, &h) == -ENXIO || pid != 1);
if (!detect_container() && pid == 1)
ASSERT_ERROR(get_ctty_devnr(pid, &h), ENXIO);
(void) getenv_for_pid(pid, "PATH", &i);
log_info("PID"PID_FMT" $PATH: '%s'", pid, strna(i));
@ -136,14 +141,14 @@ static void test_pid_get_cmdline_one(pid_t pid) {
r = pid_get_cmdline_strv(pid, 0, &strv_a);
if (r >= 0)
assert_se(joined = strv_join(strv_a, "\", \""));
ASSERT_NOT_NULL(joined = strv_join(strv_a, "\", \""));
log_info(" \"%s\"", r >= 0 ? joined : errno_to_name(r));
joined = mfree(joined);
r = pid_get_cmdline_strv(pid, PROCESS_CMDLINE_COMM_FALLBACK, &strv_b);
if (r >= 0)
assert_se(joined = strv_join(strv_b, "\", \""));
ASSERT_NOT_NULL(joined = strv_join(strv_b, "\", \""));
log_info(" \"%s\"", r >= 0 ? joined : errno_to_name(r));
}
@ -151,13 +156,13 @@ TEST(pid_get_cmdline) {
_cleanup_closedir_ DIR *d = NULL;
int r;
assert_se(proc_dir_open(&d) >= 0);
ASSERT_OK(proc_dir_open(&d));
for (;;) {
pid_t pid;
r = proc_dir_read(d, &pid);
assert_se(r >= 0);
ASSERT_OK(r);
if (r == 0) /* EOF */
break;
@ -171,8 +176,8 @@ static void test_pid_get_comm_escape_one(const char *input, const char *output)
log_debug("input: <%s> — output: <%s>", input, output);
assert_se(prctl(PR_SET_NAME, input) >= 0);
assert_se(pid_get_comm(0, &n) >= 0);
ASSERT_OK_ERRNO(prctl(PR_SET_NAME, input));
ASSERT_OK(pid_get_comm(0, &n));
log_debug("got: <%s>", n);
@ -182,7 +187,7 @@ static void test_pid_get_comm_escape_one(const char *input, const char *output)
TEST(pid_get_comm_escape) {
_cleanup_free_ char *saved = NULL;
assert_se(pid_get_comm(0, &saved) >= 0);
ASSERT_OK(pid_get_comm(0, &saved));
test_pid_get_comm_escape_one("", "");
test_pid_get_comm_escape_one("foo", "foo");
@ -195,62 +200,62 @@ TEST(pid_get_comm_escape) {
test_pid_get_comm_escape_one("xxxxäöüß", "xxxx\\303\\244\\303\\266\\303\\274\\303\\237");
test_pid_get_comm_escape_one("xxxxxäöüß", "xxxxx\\303\\244\\303\\266\\303\\274\\303\\237");
assert_se(prctl(PR_SET_NAME, saved) >= 0);
ASSERT_OK_ERRNO(prctl(PR_SET_NAME, saved));
}
TEST(pid_is_unwaited) {
pid_t pid;
pid = fork();
assert_se(pid >= 0);
ASSERT_OK_ERRNO(pid);
if (pid == 0) {
_exit(EXIT_SUCCESS);
} else {
int status;
assert_se(waitpid(pid, &status, 0) == pid);
assert_se(pid_is_unwaited(pid) == 0);
ASSERT_OK_EQ_ERRNO(waitpid(pid, &status, 0), pid);
ASSERT_OK_ZERO(pid_is_unwaited(pid));
}
assert_se(pid_is_unwaited(getpid_cached()) > 0);
assert_se(pid_is_unwaited(-1) < 0);
ASSERT_OK_POSITIVE(pid_is_unwaited(getpid_cached()));
ASSERT_FAIL(pid_is_unwaited(-1));
}
TEST(pid_is_alive) {
pid_t pid;
pid = fork();
assert_se(pid >= 0);
ASSERT_OK_ERRNO(pid);
if (pid == 0) {
_exit(EXIT_SUCCESS);
} else {
int status;
assert_se(waitpid(pid, &status, 0) == pid);
assert_se(pid_is_alive(pid) == 0);
ASSERT_OK_EQ_ERRNO(waitpid(pid, &status, 0), pid);
ASSERT_OK_ZERO(pid_is_alive(pid));
}
assert_se(pid_is_alive(getpid_cached()) > 0);
assert_se(pid_is_alive(-1) < 0);
ASSERT_OK_POSITIVE(pid_is_alive(getpid_cached()));
ASSERT_FAIL(pid_is_alive(-1));
}
TEST(personality) {
assert_se(personality_to_string(PER_LINUX));
assert_se(!personality_to_string(PERSONALITY_INVALID));
ASSERT_NOT_NULL(personality_to_string(PER_LINUX));
ASSERT_NULL(personality_to_string(PERSONALITY_INVALID));
ASSERT_STREQ(personality_to_string(PER_LINUX), architecture_to_string(native_architecture()));
assert_se(personality_from_string(personality_to_string(PER_LINUX)) == PER_LINUX);
assert_se(personality_from_string(architecture_to_string(native_architecture())) == PER_LINUX);
ASSERT_EQ(personality_from_string(personality_to_string(PER_LINUX)), (unsigned long) PER_LINUX);
ASSERT_EQ(personality_from_string(architecture_to_string(native_architecture())), (unsigned long) PER_LINUX);
#ifdef __x86_64__
ASSERT_STREQ(personality_to_string(PER_LINUX), "x86-64");
ASSERT_STREQ(personality_to_string(PER_LINUX32), "x86");
assert_se(personality_from_string("x86-64") == PER_LINUX);
assert_se(personality_from_string("x86") == PER_LINUX32);
assert_se(personality_from_string("ia64") == PERSONALITY_INVALID);
assert_se(personality_from_string(NULL) == PERSONALITY_INVALID);
ASSERT_EQ(personality_from_string("x86-64"), (unsigned long) PER_LINUX);
ASSERT_EQ(personality_from_string("x86"), (unsigned long) PER_LINUX32);
ASSERT_EQ(personality_from_string("ia64"), PERSONALITY_INVALID);
ASSERT_EQ(personality_from_string(NULL), PERSONALITY_INVALID);
assert_se(personality_from_string(personality_to_string(PER_LINUX32)) == PER_LINUX32);
ASSERT_EQ(personality_from_string(personality_to_string(PER_LINUX32)), (unsigned long) PER_LINUX32);
#endif
}
@ -288,30 +293,31 @@ TEST(pid_get_cmdline_harder) {
(void) wait_for_terminate(pid, &si);
assert_se(si.si_code == CLD_EXITED);
assert_se(si.si_status == 0);
ASSERT_EQ(si.si_code, CLD_EXITED);
ASSERT_OK_ZERO(si.si_status);
return;
}
assert_se(pid == 0);
ASSERT_OK_ZERO(pid);
r = detach_mount_namespace();
if (r < 0) {
log_warning_errno(r, "detach mount namespace failed: %m");
assert_se(ERRNO_IS_PRIVILEGE(r));
if (!ERRNO_IS_PRIVILEGE(r))
ASSERT_OK(r);
return;
}
fd = mkostemp(path, O_CLOEXEC);
assert_se(fd >= 0);
ASSERT_OK_ERRNO(fd);
/* Note that we don't unmount the following bind-mount at the end of the test because the kernel
* will clear up its /proc/PID/ hierarchy automatically as soon as the test stops. */
if (mount(path, "/proc/self/cmdline", "bind", MS_BIND, NULL) < 0) {
/* This happens under selinux… Abort the test in this case. */
log_warning_errno(errno, "mount(..., \"/proc/self/cmdline\", \"bind\", ...) failed: %m");
assert_se(IN_SET(errno, EPERM, EACCES));
ASSERT_TRUE(IN_SET(errno, EPERM, EACCES));
return;
}
@ -320,197 +326,197 @@ TEST(pid_get_cmdline_harder) {
if (setrlimit(RLIMIT_STACK, &RLIMIT_MAKE_CONST(RLIM_INFINITY)) < 0)
log_warning("Testing without RLIMIT_STACK=infinity");
assert_se(unlink(path) >= 0);
ASSERT_OK_ERRNO(unlink(path));
assert_se(prctl(PR_SET_NAME, "testa") >= 0);
ASSERT_OK_ERRNO(prctl(PR_SET_NAME, "testa"));
assert_se(pid_get_cmdline(0, SIZE_MAX, 0, &line) == -ENOENT);
ASSERT_ERROR(pid_get_cmdline(0, SIZE_MAX, 0, &line), ENOENT);
assert_se(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "[testa]");
line = mfree(line);
assert_se(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_COMM_FALLBACK | PROCESS_CMDLINE_QUOTE, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_COMM_FALLBACK | PROCESS_CMDLINE_QUOTE, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "\"[testa]\""); /* quoting is enabled here */
line = mfree(line);
assert_se(pid_get_cmdline(0, 0, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 0, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "");
line = mfree(line);
assert_se(pid_get_cmdline(0, 1, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 1, PROCESS_CMDLINE_COMM_FALLBACK, &line));
ASSERT_STREQ(line, "");
line = mfree(line);
assert_se(pid_get_cmdline(0, 2, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 2, PROCESS_CMDLINE_COMM_FALLBACK, &line));
ASSERT_STREQ(line, "[…");
line = mfree(line);
assert_se(pid_get_cmdline(0, 3, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 3, PROCESS_CMDLINE_COMM_FALLBACK, &line));
ASSERT_STREQ(line, "[t…");
line = mfree(line);
assert_se(pid_get_cmdline(0, 4, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 4, PROCESS_CMDLINE_COMM_FALLBACK, &line));
ASSERT_STREQ(line, "[te…");
line = mfree(line);
assert_se(pid_get_cmdline(0, 5, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 5, PROCESS_CMDLINE_COMM_FALLBACK, &line));
ASSERT_STREQ(line, "[tes…");
line = mfree(line);
assert_se(pid_get_cmdline(0, 6, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 6, PROCESS_CMDLINE_COMM_FALLBACK, &line));
ASSERT_STREQ(line, "[test…");
line = mfree(line);
assert_se(pid_get_cmdline(0, 7, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 7, PROCESS_CMDLINE_COMM_FALLBACK, &line));
ASSERT_STREQ(line, "[testa]");
line = mfree(line);
assert_se(pid_get_cmdline(0, 8, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 8, PROCESS_CMDLINE_COMM_FALLBACK, &line));
ASSERT_STREQ(line, "[testa]");
line = mfree(line);
assert_se(pid_get_cmdline_strv(0, PROCESS_CMDLINE_COMM_FALLBACK, &args) >= 0);
assert_se(strv_equal(args, STRV_MAKE("[testa]")));
ASSERT_OK(pid_get_cmdline_strv(0, PROCESS_CMDLINE_COMM_FALLBACK, &args));
ASSERT_TRUE(strv_equal(args, STRV_MAKE("[testa]")));
args = strv_free(args);
/* Test with multiple arguments that don't require quoting */
assert_se(write(fd, "foo\0bar", 8) == 8);
ASSERT_OK_EQ_ERRNO(write(fd, "foo\0bar", 8), 8);
assert_se(pid_get_cmdline(0, SIZE_MAX, 0, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, SIZE_MAX, 0, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "foo bar");
line = mfree(line);
assert_se(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_COMM_FALLBACK, &line));
ASSERT_STREQ(line, "foo bar");
line = mfree(line);
assert_se(pid_get_cmdline_strv(0, PROCESS_CMDLINE_COMM_FALLBACK, &args) >= 0);
assert_se(strv_equal(args, STRV_MAKE("foo", "bar")));
ASSERT_OK(pid_get_cmdline_strv(0, PROCESS_CMDLINE_COMM_FALLBACK, &args));
ASSERT_TRUE(strv_equal(args, STRV_MAKE("foo", "bar")));
args = strv_free(args);
assert_se(write(fd, "quux", 4) == 4);
assert_se(pid_get_cmdline(0, SIZE_MAX, 0, &line) >= 0);
ASSERT_OK_EQ_ERRNO(write(fd, "quux", 4), 4);
ASSERT_OK(pid_get_cmdline(0, SIZE_MAX, 0, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "foo bar quux");
line = mfree(line);
assert_se(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "foo bar quux");
line = mfree(line);
assert_se(pid_get_cmdline(0, 1, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 1, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "");
line = mfree(line);
assert_se(pid_get_cmdline(0, 2, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 2, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "f…");
line = mfree(line);
assert_se(pid_get_cmdline(0, 3, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 3, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "fo…");
line = mfree(line);
assert_se(pid_get_cmdline(0, 4, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 4, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "foo…");
line = mfree(line);
assert_se(pid_get_cmdline(0, 5, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 5, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "foo …");
line = mfree(line);
assert_se(pid_get_cmdline(0, 6, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 6, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "foo b…");
line = mfree(line);
assert_se(pid_get_cmdline(0, 7, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 7, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "foo ba…");
line = mfree(line);
assert_se(pid_get_cmdline(0, 8, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 8, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "foo bar…");
line = mfree(line);
assert_se(pid_get_cmdline(0, 9, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 9, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "foo bar …");
line = mfree(line);
assert_se(pid_get_cmdline(0, 10, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 10, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "foo bar q…");
line = mfree(line);
assert_se(pid_get_cmdline(0, 11, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 11, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "foo bar qu…");
line = mfree(line);
assert_se(pid_get_cmdline(0, 12, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 12, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "foo bar quux");
line = mfree(line);
assert_se(pid_get_cmdline(0, 13, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 13, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "foo bar quux");
line = mfree(line);
assert_se(pid_get_cmdline(0, 14, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 14, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "foo bar quux");
line = mfree(line);
assert_se(pid_get_cmdline(0, 1000, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 1000, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "foo bar quux");
line = mfree(line);
assert_se(pid_get_cmdline_strv(0, PROCESS_CMDLINE_COMM_FALLBACK, &args) >= 0);
assert_se(strv_equal(args, STRV_MAKE("foo", "bar", "quux")));
ASSERT_OK(pid_get_cmdline_strv(0, PROCESS_CMDLINE_COMM_FALLBACK, &args));
ASSERT_TRUE(strv_equal(args, STRV_MAKE("foo", "bar", "quux")));
args = strv_free(args);
assert_se(ftruncate(fd, 0) >= 0);
assert_se(prctl(PR_SET_NAME, "aaaa bbbb cccc") >= 0);
ASSERT_OK_ERRNO(ftruncate(fd, 0));
ASSERT_OK_ERRNO(prctl(PR_SET_NAME, "aaaa bbbb cccc"));
assert_se(pid_get_cmdline(0, SIZE_MAX, 0, &line) == -ENOENT);
ASSERT_ERROR(pid_get_cmdline(0, SIZE_MAX, 0, &line), ENOENT);
assert_se(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "[aaaa bbbb cccc]");
line = mfree(line);
assert_se(pid_get_cmdline(0, 10, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 10, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "[aaaa bbb…");
line = mfree(line);
assert_se(pid_get_cmdline(0, 11, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 11, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "[aaaa bbbb…");
line = mfree(line);
assert_se(pid_get_cmdline(0, 12, PROCESS_CMDLINE_COMM_FALLBACK, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, 12, PROCESS_CMDLINE_COMM_FALLBACK, &line));
log_debug("'%s'", line);
ASSERT_STREQ(line, "[aaaa bbbb …");
line = mfree(line);
assert_se(pid_get_cmdline_strv(0, PROCESS_CMDLINE_COMM_FALLBACK, &args) >= 0);
assert_se(strv_equal(args, STRV_MAKE("[aaaa bbbb cccc]")));
ASSERT_OK(pid_get_cmdline_strv(0, PROCESS_CMDLINE_COMM_FALLBACK, &args));
ASSERT_TRUE(strv_equal(args, STRV_MAKE("[aaaa bbbb cccc]")));
args = strv_free(args);
/* Test with multiple arguments that do require quoting */
@ -520,24 +526,24 @@ TEST(pid_get_cmdline_harder) {
#define EXPECT1p "foo $'\\'bar\\'' $'\"bar$\"' $'x y z' $'!``'"
#define EXPECT1v STRV_MAKE("foo", "'bar'", "\"bar$\"", "x y z", "!``")
assert_se(lseek(fd, SEEK_SET, 0) == 0);
assert_se(write(fd, CMDLINE1, sizeof CMDLINE1) == sizeof CMDLINE1);
assert_se(ftruncate(fd, sizeof CMDLINE1) == 0);
ASSERT_OK_ZERO_ERRNO(lseek(fd, SEEK_SET, 0));
ASSERT_OK_EQ_ERRNO(write(fd, CMDLINE1, sizeof(CMDLINE1)), (ssize_t) sizeof(CMDLINE1));
ASSERT_OK_ZERO_ERRNO(ftruncate(fd, sizeof(CMDLINE1)));
assert_se(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_QUOTE, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_QUOTE, &line));
log_debug("got: ==%s==", line);
log_debug("exp: ==%s==", EXPECT1);
ASSERT_STREQ(line, EXPECT1);
line = mfree(line);
assert_se(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_QUOTE_POSIX, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_QUOTE_POSIX, &line));
log_debug("got: ==%s==", line);
log_debug("exp: ==%s==", EXPECT1p);
ASSERT_STREQ(line, EXPECT1p);
line = mfree(line);
assert_se(pid_get_cmdline_strv(0, 0, &args) >= 0);
assert_se(strv_equal(args, EXPECT1v));
ASSERT_OK(pid_get_cmdline_strv(0, 0, &args));
ASSERT_TRUE(strv_equal(args, EXPECT1v));
args = strv_free(args);
#define CMDLINE2 "foo\0\1\2\3\0\0"
@ -545,24 +551,24 @@ TEST(pid_get_cmdline_harder) {
#define EXPECT2p "foo $'\\001\\002\\003'"
#define EXPECT2v STRV_MAKE("foo", "\1\2\3")
assert_se(lseek(fd, SEEK_SET, 0) == 0);
assert_se(write(fd, CMDLINE2, sizeof CMDLINE2) == sizeof CMDLINE2);
assert_se(ftruncate(fd, sizeof CMDLINE2) == 0);
ASSERT_OK_ZERO_ERRNO(lseek(fd, SEEK_SET, 0));
ASSERT_OK_EQ_ERRNO(write(fd, CMDLINE2, sizeof(CMDLINE2)), (ssize_t) sizeof(CMDLINE2));
ASSERT_OK_ZERO_ERRNO(ftruncate(fd, sizeof CMDLINE2));
assert_se(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_QUOTE, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_QUOTE, &line));
log_debug("got: ==%s==", line);
log_debug("exp: ==%s==", EXPECT2);
ASSERT_STREQ(line, EXPECT2);
line = mfree(line);
assert_se(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_QUOTE_POSIX, &line) >= 0);
ASSERT_OK(pid_get_cmdline(0, SIZE_MAX, PROCESS_CMDLINE_QUOTE_POSIX, &line));
log_debug("got: ==%s==", line);
log_debug("exp: ==%s==", EXPECT2p);
ASSERT_STREQ(line, EXPECT2p);
line = mfree(line);
assert_se(pid_get_cmdline_strv(0, 0, &args) >= 0);
assert_se(strv_equal(args, EXPECT2v));
ASSERT_OK(pid_get_cmdline_strv(0, 0, &args));
ASSERT_TRUE(strv_equal(args, EXPECT2v));
args = strv_free(args);
safe_close(fd);
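For readers skimming the conversion above: the *_ERRNO assertion variants target libc-style calls that return a negative value and set errno on failure, while the ZERO/EQ variants additionally compare the return value. A rough, self-contained sketch of that contract, using hypothetical MY_* names rather than the macros actually defined in the systemd test headers:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Illustrative only: the real macros also report file/line and cover a
     * larger family of comparisons. */
    #define MY_ASSERT_OK_ERRNO(expr)                                          \
            do {                                                              \
                    if ((expr) < 0) {                                         \
                            fprintf(stderr, "%s failed: %s\n",                \
                                    #expr, strerror(errno));                  \
                            abort();                                          \
                    }                                                         \
            } while (0)

    #define MY_ASSERT_OK_EQ_ERRNO(expr, expected)                             \
            do {                                                              \
                    __typeof__(expr) _r = (expr);                             \
                    if (_r < 0 || _r != (expected)) {                         \
                            fprintf(stderr, "%s returned %jd (%s), expected %jd\n", \
                                    #expr, (intmax_t) _r,                     \
                                    _r < 0 ? strerror(errno) : "no error",    \
                                    (intmax_t) (expected));                   \
                            abort();                                          \
                    }                                                         \
            } while (0)

    int main(void) {
            int fd;

            MY_ASSERT_OK_ERRNO(fd = open("/dev/null", O_RDWR|O_CLOEXEC));
            MY_ASSERT_OK_EQ_ERRNO(write(fd, "x", 1), (ssize_t) 1);
            MY_ASSERT_OK_ERRNO(close(fd));
            return 0;
    }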
@@ -577,10 +583,11 @@ TEST(getpid_cached) {
b = getpid_cached();
c = getpid();
assert_se(a == b && a == c);
ASSERT_EQ(a, b);
ASSERT_EQ(a, c);
child = fork();
assert_se(child >= 0);
ASSERT_OK_ERRNO(child);
if (child == 0) {
/* In child */
@@ -588,7 +595,8 @@ TEST(getpid_cached) {
b = getpid_cached();
c = getpid();
assert_se(a == b && a == c);
ASSERT_EQ(a, b);
ASSERT_EQ(a, c);
_exit(EXIT_SUCCESS);
}
@@ -596,11 +604,13 @@ TEST(getpid_cached) {
e = getpid_cached();
f = getpid();
assert_se(a == d && a == e && a == f);
ASSERT_EQ(a, d);
ASSERT_EQ(a, e);
ASSERT_EQ(a, f);
assert_se(wait_for_terminate(child, &si) >= 0);
assert_se(si.si_status == 0);
assert_se(si.si_code == CLD_EXITED);
ASSERT_OK(wait_for_terminate(child, &si));
ASSERT_EQ(si.si_status, 0);
ASSERT_EQ(si.si_code, CLD_EXITED);
}
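The fork in this test checks that the cached PID agrees with getpid() in the parent, in the child, and in the parent again after the child exits. A minimal caching scheme with the same property could look like the sketch below; this is only an assumption about the general idea, the real implementation is considerably more involved.

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static pid_t cached_pid = 0;

    /* Runs in the child after fork(): drop the stale value. */
    static void invalidate_cache(void) {
            cached_pid = 0;
    }

    static pid_t my_getpid_cached(void) {
            if (cached_pid == 0)
                    cached_pid = getpid();
            return cached_pid;
    }

    int main(void) {
            pthread_atfork(NULL, NULL, invalidate_cache);

            printf("parent: getpid=%d cached=%d\n", (int) getpid(), (int) my_getpid_cached());

            pid_t child = fork();
            if (child == 0) {
                    printf("child:  getpid=%d cached=%d\n", (int) getpid(), (int) my_getpid_cached());
                    _exit(0);
            }

            waitpid(child, NULL, 0);
            return 0;
    }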
TEST(getpid_measure) {
@@ -635,7 +645,7 @@ TEST(safe_fork) {
BLOCK_SIGNALS(SIGCHLD);
r = safe_fork("(test-child)", FORK_RESET_SIGNALS|FORK_CLOSE_ALL_FDS|FORK_DEATHSIG_SIGTERM|FORK_REARRANGE_STDIO|FORK_REOPEN_LOG, &pid);
assert_se(r >= 0);
ASSERT_OK(r);
if (r == 0) {
/* child */
@@ -644,43 +654,42 @@ TEST(safe_fork) {
_exit(88);
}
assert_se(wait_for_terminate(pid, &status) >= 0);
assert_se(status.si_code == CLD_EXITED);
assert_se(status.si_status == 88);
ASSERT_OK(wait_for_terminate(pid, &status));
ASSERT_EQ(status.si_code, CLD_EXITED);
ASSERT_EQ(status.si_status, 88);
}
TEST(pid_to_ptr) {
assert_se(PTR_TO_PID(NULL) == 0);
ASSERT_EQ(PTR_TO_PID(NULL), 0);
ASSERT_NULL(PID_TO_PTR(0));
assert_se(PTR_TO_PID(PID_TO_PTR(1)) == 1);
assert_se(PTR_TO_PID(PID_TO_PTR(2)) == 2);
assert_se(PTR_TO_PID(PID_TO_PTR(-1)) == -1);
assert_se(PTR_TO_PID(PID_TO_PTR(-2)) == -2);
ASSERT_EQ(PTR_TO_PID(PID_TO_PTR(1)), 1);
ASSERT_EQ(PTR_TO_PID(PID_TO_PTR(2)), 2);
ASSERT_EQ(PTR_TO_PID(PID_TO_PTR(-1)), -1);
ASSERT_EQ(PTR_TO_PID(PID_TO_PTR(-2)), -2);
assert_se(PTR_TO_PID(PID_TO_PTR(INT16_MAX)) == INT16_MAX);
assert_se(PTR_TO_PID(PID_TO_PTR(INT16_MIN)) == INT16_MIN);
ASSERT_EQ(PTR_TO_PID(PID_TO_PTR(INT16_MAX)), INT16_MAX);
ASSERT_EQ(PTR_TO_PID(PID_TO_PTR(INT16_MIN)), INT16_MIN);
assert_se(PTR_TO_PID(PID_TO_PTR(INT32_MAX)) == INT32_MAX);
assert_se(PTR_TO_PID(PID_TO_PTR(INT32_MIN)) == INT32_MIN);
ASSERT_EQ(PTR_TO_PID(PID_TO_PTR(INT32_MAX)), INT32_MAX);
ASSERT_EQ(PTR_TO_PID(PID_TO_PTR(INT32_MIN)), INT32_MIN);
}
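The round trips above work because a pid_t fits losslessly into a pointer once funneled through an integer type of pointer width. A stand-in for such conversion macros, with local MY_* names and under the assumption that this is roughly their shape:

    #include <assert.h>
    #include <stdint.h>
    #include <sys/types.h>

    #define MY_PID_TO_PTR(p) ((void *) (intptr_t) (p))
    #define MY_PTR_TO_PID(p) ((pid_t) (intptr_t) (p))

    int main(void) {
            assert(MY_PTR_TO_PID(MY_PID_TO_PTR(1)) == 1);
            assert(MY_PTR_TO_PID(MY_PID_TO_PTR(-1)) == -1);
            assert(MY_PTR_TO_PID(MY_PID_TO_PTR(INT32_MIN)) == INT32_MIN);
            assert(MY_PTR_TO_PID(MY_PID_TO_PTR(INT32_MAX)) == INT32_MAX);
            return 0;
    }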
static void test_ioprio_class_from_to_string_one(const char *val, int expected, int normalized) {
assert_se(ioprio_class_from_string(val) == expected);
ASSERT_EQ(ioprio_class_from_string(val), expected);
if (expected >= 0) {
_cleanup_free_ char *s = NULL;
unsigned ret;
int combined;
assert_se(ioprio_class_to_string_alloc(expected, &s) == 0);
ASSERT_OK_ZERO(ioprio_class_to_string_alloc(expected, &s));
/* We sometimes get a class number and sometimes a name back */
assert_se(streq(s, val) ||
safe_atou(val, &ret) == 0);
ASSERT_TRUE(streq(s, val) || safe_atou(val, &ret) == 0);
/* Make sure normalization works, i.e. NONE → BE gets normalized */
combined = ioprio_normalize(ioprio_prio_value(expected, 0));
assert_se(ioprio_prio_class(combined) == normalized);
assert_se(expected != IOPRIO_CLASS_NONE || ioprio_prio_data(combined) == 4);
ASSERT_EQ(ioprio_prio_class(combined), normalized);
ASSERT_TRUE(expected != IOPRIO_CLASS_NONE || ioprio_prio_data(combined) == 4);
}
}
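The ioprio helpers exercised above pack an I/O scheduling class and a priority value into a single int, presumably following the kernel encoding in which the class sits above a 13-bit data field (see the uapi header linux/ioprio.h). A self-contained sketch of that packing with local macro names:

    #include <stdio.h>

    #define MY_IOPRIO_CLASS_SHIFT 13
    #define MY_IOPRIO_PRIO_VALUE(class, data) (((class) << MY_IOPRIO_CLASS_SHIFT) | (data))
    #define MY_IOPRIO_PRIO_CLASS(value)       ((value) >> MY_IOPRIO_CLASS_SHIFT)
    #define MY_IOPRIO_PRIO_DATA(value)        ((value) & ((1 << MY_IOPRIO_CLASS_SHIFT) - 1))

    int main(void) {
            int v = MY_IOPRIO_PRIO_VALUE(2 /* IOPRIO_CLASS_BE */, 4);
            printf("class=%d data=%d\n", MY_IOPRIO_PRIO_CLASS(v), MY_IOPRIO_PRIO_DATA(v));
            return 0;
    }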
@@ -701,8 +710,8 @@ TEST(setpriority_closest) {
int r;
r = safe_fork("(test-setprio)",
FORK_RESET_SIGNALS|FORK_CLOSE_ALL_FDS|FORK_DEATHSIG_SIGTERM|FORK_WAIT|FORK_LOG, NULL);
assert_se(r >= 0);
FORK_RESET_SIGNALS|FORK_CLOSE_ALL_FDS|FORK_DEATHSIG_SIGTERM|FORK_WAIT|FORK_LOG|FORK_REOPEN_LOG, NULL);
ASSERT_OK(r);
if (r == 0) {
bool full_test;
@@ -713,16 +722,21 @@ TEST(setpriority_closest) {
if (setrlimit(RLIMIT_NICE, &RLIMIT_MAKE_CONST(30)) < 0) {
/* If this fails we are probably unprivileged or in a userns of some kind, let's skip
* the full test */
assert_se(ERRNO_IS_PRIVILEGE(errno));
if (!ERRNO_IS_PRIVILEGE(errno))
ASSERT_OK_ERRNO(-1);
full_test = false;
} else {
/* However, if the hard limit was above 30, setrlimit would succeed unprivileged, so
* check if the UID/GID can be changed before enabling the full test. */
if (setresgid(GID_NOBODY, GID_NOBODY, GID_NOBODY) < 0) {
assert_se(ERRNO_IS_PRIVILEGE(errno));
/* If the nobody user does not exist (user namespace) we get EINVAL. */
if (!ERRNO_IS_PRIVILEGE(errno) && errno != EINVAL)
ASSERT_OK_ERRNO(-1);
full_test = false;
} else if (setresuid(UID_NOBODY, UID_NOBODY, UID_NOBODY) < 0) {
assert_se(ERRNO_IS_PRIVILEGE(errno));
/* If the nobody user does not exist (user namespace) we get EINVAL. */
if (!ERRNO_IS_PRIVILEGE(errno) && errno != EINVAL)
ASSERT_OK_ERRNO(-1);
full_test = false;
} else
full_test = true;
@@ -730,61 +744,69 @@ TEST(setpriority_closest) {
errno = 0;
p = getpriority(PRIO_PROCESS, 0);
assert_se(errno == 0);
ASSERT_EQ(errno, 0);
/* It should always be possible to set our nice level to the current one */
assert_se(setpriority_closest(p) > 0);
ASSERT_OK_POSITIVE(setpriority_closest(p));
errno = 0;
q = getpriority(PRIO_PROCESS, 0);
assert_se(errno == 0 && p == q);
ASSERT_EQ(errno, 0);
ASSERT_EQ(p, q);
/* It should also be possible to set the nice level to one higher */
if (p < PRIO_MAX-1) {
assert_se(setpriority_closest(++p) > 0);
ASSERT_OK_POSITIVE(setpriority_closest(++p));
errno = 0;
q = getpriority(PRIO_PROCESS, 0);
assert_se(errno == 0 && p == q);
ASSERT_EQ(errno, 0);
ASSERT_EQ(p, q);
}
/* It should also be possible to set the nice level to two higher */
if (p < PRIO_MAX-1) {
assert_se(setpriority_closest(++p) > 0);
ASSERT_OK_POSITIVE(setpriority_closest(++p));
errno = 0;
q = getpriority(PRIO_PROCESS, 0);
assert_se(errno == 0 && p == q);
ASSERT_EQ(errno, 0);
ASSERT_EQ(p, q);
}
if (full_test) {
/* These two should work, given the RLIMIT_NICE we set above */
assert_se(setpriority_closest(-10) > 0);
ASSERT_OK_POSITIVE(setpriority_closest(-10));
errno = 0;
q = getpriority(PRIO_PROCESS, 0);
assert_se(errno == 0 && q == -10);
ASSERT_EQ(errno, 0);
ASSERT_EQ(q, -10);
assert_se(setpriority_closest(-9) > 0);
ASSERT_OK_POSITIVE(setpriority_closest(-9));
errno = 0;
q = getpriority(PRIO_PROCESS, 0);
assert_se(errno == 0 && q == -9);
ASSERT_EQ(errno, 0);
ASSERT_EQ(q, -9);
/* This should succeed but should be clamped to the limit */
assert_se(setpriority_closest(-11) == 0);
ASSERT_OK_ZERO(setpriority_closest(-11));
errno = 0;
q = getpriority(PRIO_PROCESS, 0);
assert_se(errno == 0 && q == -10);
ASSERT_EQ(errno, 0);
ASSERT_EQ(q, -10);
assert_se(setpriority_closest(-8) > 0);
ASSERT_OK_POSITIVE(setpriority_closest(-8));
errno = 0;
q = getpriority(PRIO_PROCESS, 0);
assert_se(errno == 0 && q == -8);
ASSERT_EQ(errno, 0);
ASSERT_EQ(q, -8);
/* This should succeed but should be clamped to the limit */
assert_se(setpriority_closest(-12) == 0);
ASSERT_OK_ZERO(setpriority_closest(-12));
errno = 0;
q = getpriority(PRIO_PROCESS, 0);
assert_se(errno == 0 && q == -10);
ASSERT_EQ(errno, 0);
ASSERT_EQ(q, -10);
}
_exit(EXIT_SUCCESS);
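The clamping expectations in this hunk follow from the RLIMIT_NICE semantics: a soft limit of 30 permits nice values down to 20 - 30 = -10, so requests for -11 or -12 can only be honored as -10. A simplified sketch of that kind of clamping, using a hypothetical set_nice_closest() rather than the real setpriority_closest(), which handles more corner cases:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/resource.h>

    /* Returns >0 if the requested nice value was applied exactly,
     * 0 if it had to be clamped to the RLIMIT_NICE ceiling, <0 on error. */
    static int set_nice_closest(int level) {
            struct rlimit rl;

            if (getrlimit(RLIMIT_NICE, &rl) < 0)
                    return -errno;

            /* RLIMIT_NICE is encoded as 20 - nice, valid range 1..40. */
            unsigned lim = rl.rlim_cur > 40 ? 40 : (unsigned) rl.rlim_cur;
            int lowest = 20 - (int) lim;
            int clamped = level < lowest ? lowest : level;

            if (setpriority(PRIO_PROCESS, 0, clamped) < 0)
                    return -errno;

            return clamped == level ? 1 : 0;
    }

    int main(void) {
            int r = set_nice_closest(-11);
            printf("set_nice_closest(-11) -> %d, nice is now %d\n",
                   r, getpriority(PRIO_PROCESS, 0));
            return 0;
    }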
@@ -795,10 +817,10 @@ TEST(get_process_ppid) {
uint64_t limit;
int r;
assert_se(get_process_ppid(1, NULL) == -EADDRNOTAVAIL);
ASSERT_ERROR(get_process_ppid(1, NULL), EADDRNOTAVAIL);
/* A process with a PID above the global limit definitely doesn't exist. Verify that. */
assert_se(procfs_get_pid_max(&limit) >= 0);
ASSERT_OK(procfs_get_pid_max(&limit));
log_debug("kernel.pid_max = %"PRIu64, limit);
if (limit < INT_MAX) {
@@ -817,10 +839,10 @@ TEST(get_process_ppid) {
break;
}
assert_se(r >= 0);
ASSERT_OK(r);
assert_se(pid_get_cmdline(pid, SIZE_MAX, PROCESS_CMDLINE_COMM_FALLBACK, &c1) >= 0);
assert_se(pid_get_cmdline(ppid, SIZE_MAX, PROCESS_CMDLINE_COMM_FALLBACK, &c2) >= 0);
ASSERT_OK(pid_get_cmdline(pid, SIZE_MAX, PROCESS_CMDLINE_COMM_FALLBACK, &c1));
ASSERT_OK(pid_get_cmdline(ppid, SIZE_MAX, PROCESS_CMDLINE_COMM_FALLBACK, &c2));
log_info("Parent of " PID_FMT " (%s) is " PID_FMT " (%s).", pid, c1, ppid, c2);
@@ -831,19 +853,20 @@ TEST(get_process_ppid) {
TEST(set_oom_score_adjust) {
int a, b, r;
assert_se(get_oom_score_adjust(&a) >= 0);
ASSERT_OK(get_oom_score_adjust(&a));
r = set_oom_score_adjust(OOM_SCORE_ADJ_MIN);
assert_se(r >= 0 || ERRNO_IS_PRIVILEGE(r));
if (!ERRNO_IS_PRIVILEGE(r))
ASSERT_OK(r);
if (r >= 0) {
assert_se(get_oom_score_adjust(&b) >= 0);
assert_se(b == OOM_SCORE_ADJ_MIN);
ASSERT_OK(get_oom_score_adjust(&b));
ASSERT_EQ(b, OOM_SCORE_ADJ_MIN);
}
assert_se(set_oom_score_adjust(a) >= 0);
assert_se(get_oom_score_adjust(&b) >= 0);
assert_se(b == a);
ASSERT_OK(set_oom_score_adjust(a));
ASSERT_OK(get_oom_score_adjust(&b));
ASSERT_EQ(b, a);
}
static void* dummy_thread(void *p) {
@@ -851,10 +874,10 @@ static void* dummy_thread(void *p) {
char x;
/* let main thread know we are ready */
assert_se(write(fd, &(const char) { 'x' }, 1) == 1);
ASSERT_OK_EQ_ERRNO(write(fd, &(const char) { 'x' }, 1), 1);
/* wait for the main thread to tell us to shut down */
assert_se(read(fd, &x, 1) == 1);
ASSERT_OK_EQ_ERRNO(read(fd, &x, 1), 1);
return NULL;
}
@@ -862,38 +885,42 @@ TEST(get_process_threads) {
int r;
/* Run this test in a child, so that we can guarantee there's exactly one thread around in the child */
r = safe_fork("(nthreads)", FORK_RESET_SIGNALS|FORK_DEATHSIG_SIGTERM|FORK_REOPEN_LOG|FORK_WAIT|FORK_LOG, NULL);
assert_se(r >= 0);
r = safe_fork("(nthreads)", FORK_RESET_SIGNALS|FORK_DEATHSIG_SIGTERM|FORK_WAIT|FORK_LOG, NULL);
ASSERT_OK(r);
if (r == 0) {
_cleanup_close_pair_ int pfd[2] = EBADF_PAIR, ppfd[2] = EBADF_PAIR;
pthread_t t, tt;
char x;
assert_se(socketpair(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0, pfd) >= 0);
assert_se(socketpair(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0, ppfd) >= 0);
ASSERT_OK_ERRNO(socketpair(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0, pfd));
ASSERT_OK_ERRNO(socketpair(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0, ppfd));
assert_se(get_process_threads(0) == 1);
assert_se(pthread_create(&t, NULL, &dummy_thread, FD_TO_PTR(pfd[0])) == 0);
assert_se(read(pfd[1], &x, 1) == 1);
assert_se(get_process_threads(0) == 2);
assert_se(pthread_create(&tt, NULL, &dummy_thread, FD_TO_PTR(ppfd[0])) == 0);
assert_se(read(ppfd[1], &x, 1) == 1);
assert_se(get_process_threads(0) == 3);
ASSERT_OK_EQ(get_process_threads(0), 1);
ASSERT_OK_ZERO_ERRNO(pthread_create(&t, NULL, &dummy_thread, FD_TO_PTR(pfd[0])));
ASSERT_OK_EQ_ERRNO(read(pfd[1], &x, 1), 1);
ASSERT_OK_EQ(get_process_threads(0), 2);
ASSERT_OK_ZERO_ERRNO(pthread_create(&tt, NULL, &dummy_thread, FD_TO_PTR(ppfd[0])));
ASSERT_OK_EQ_ERRNO(read(ppfd[1], &x, 1), 1);
ASSERT_OK_EQ(get_process_threads(0), 3);
assert_se(write(pfd[1], &(const char) { 'x' }, 1) == 1);
assert_se(pthread_join(t, NULL) == 0);
ASSERT_OK_EQ_ERRNO(write(pfd[1], &(const char) { 'x' }, 1), 1);
ASSERT_OK_ZERO_ERRNO(pthread_join(t, NULL));
/* the value reported via /proc/ is decreased asynchronously, and there appears to be no nice
* way to sync on it. Hence we do the weak >= 2 check, even though == 2 is what we'd actually
* like to check here */
assert_se(get_process_threads(0) >= 2);
r = get_process_threads(0);
ASSERT_OK(r);
ASSERT_GE(r, 2);
assert_se(write(ppfd[1], &(const char) { 'x' }, 1) == 1);
assert_se(pthread_join(tt, NULL) == 0);
ASSERT_OK_EQ_ERRNO(write(ppfd[1], &(const char) { 'x' }, 1), 1);
ASSERT_OK_ZERO_ERRNO(pthread_join(tt, NULL));
/* similar here */
assert_se(get_process_threads(0) >= 1);
r = get_process_threads(0);
ASSERT_OK(r);
ASSERT_GE(r, 1);
_exit(EXIT_SUCCESS);
}
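The comment about the asynchronously updated value refers to how the kernel exposes the per-process thread count. One place it can be read from is the Threads: field in /proc/self/status, as in this standalone sketch (the actual get_process_threads() helper may obtain the number differently):

    #include <stdio.h>

    static int count_threads(void) {
            FILE *f = fopen("/proc/self/status", "re");
            char line[256];
            int n = -1;

            if (!f)
                    return -1;

            /* Scan for the "Threads:" line and parse the count. */
            while (fgets(line, sizeof line, f))
                    if (sscanf(line, "Threads: %d", &n) == 1)
                            break;

            fclose(f);
            return n;
    }

    int main(void) {
            printf("threads: %d\n", count_threads());
            return 0;
    }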
@@ -902,17 +929,17 @@ TEST(get_process_threads) {
TEST(is_reaper_process) {
int r;
r = safe_fork("(regular)", FORK_RESET_SIGNALS|FORK_CLOSE_ALL_FDS|FORK_WAIT, NULL);
assert_se(r >= 0);
r = safe_fork("(regular)", FORK_RESET_SIGNALS|FORK_CLOSE_ALL_FDS|FORK_REOPEN_LOG|FORK_WAIT, NULL);
ASSERT_OK(r);
if (r == 0) {
/* child */
assert_se(is_reaper_process() == 0);
ASSERT_OK_ZERO(is_reaper_process());
_exit(EXIT_SUCCESS);
}
r = safe_fork("(newpid)", FORK_RESET_SIGNALS|FORK_CLOSE_ALL_FDS|FORK_WAIT, NULL);
assert_se(r >= 0);
r = safe_fork("(newpid)", FORK_RESET_SIGNALS|FORK_CLOSE_ALL_FDS|FORK_REOPEN_LOG|FORK_WAIT, NULL);
ASSERT_OK(r);
if (r == 0) {
/* child */
@@ -923,25 +950,25 @@ TEST(is_reaper_process) {
}
}
r = safe_fork("(newpid1)", FORK_RESET_SIGNALS|FORK_CLOSE_ALL_FDS|FORK_WAIT, NULL);
assert_se(r >= 0);
r = safe_fork("(newpid1)", FORK_RESET_SIGNALS|FORK_CLOSE_ALL_FDS|FORK_REOPEN_LOG|FORK_WAIT, NULL);
ASSERT_OK(r);
if (r == 0) {
/* grandchild, which is PID1 in a pidns */
assert_se(getpid_cached() == 1);
assert_se(is_reaper_process() > 0);
ASSERT_OK_EQ(getpid_cached(), 1);
ASSERT_OK_POSITIVE(is_reaper_process());
_exit(EXIT_SUCCESS);
}
_exit(EXIT_SUCCESS);
}
r = safe_fork("(subreaper)", FORK_RESET_SIGNALS|FORK_CLOSE_ALL_FDS|FORK_WAIT, NULL);
assert_se(r >= 0);
r = safe_fork("(subreaper)", FORK_RESET_SIGNALS|FORK_CLOSE_ALL_FDS|FORK_REOPEN_LOG|FORK_WAIT, NULL);
ASSERT_OK(r);
if (r == 0) {
/* child */
assert_se(make_reaper_process(true) >= 0);
ASSERT_OK(make_reaper_process(true));
assert_se(is_reaper_process() > 0);
ASSERT_OK_POSITIVE(is_reaper_process());
_exit(EXIT_SUCCESS);
}
}
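A process counts as a reaper either because it is PID 1 of its PID namespace (the (newpid1) child above) or because it marked itself as a child subreaper (the (subreaper) child). The latter is done via prctl(); the sketch below sets the flag and reads it back, which is roughly the mechanism make_reaper_process() and is_reaper_process() presumably build on:

    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void) {
            int on = 0;

            /* Become a child subreaper for this process's subtree. */
            if (prctl(PR_SET_CHILD_SUBREAPER, 1UL) < 0) {
                    perror("PR_SET_CHILD_SUBREAPER");
                    return 1;
            }

            /* Read the flag back to confirm. */
            if (prctl(PR_GET_CHILD_SUBREAPER, &on) < 0) {
                    perror("PR_GET_CHILD_SUBREAPER");
                    return 1;
            }

            printf("child subreaper flag: %d\n", on);
            return 0;
    }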
@@ -949,22 +976,22 @@ TEST(is_reaper_process) {
TEST(pid_get_start_time) {
_cleanup_(pidref_done) PidRef pidref = PIDREF_NULL;
assert_se(pidref_set_self(&pidref) >= 0);
ASSERT_OK(pidref_set_self(&pidref));
usec_t start_time;
assert_se(pidref_get_start_time(&pidref, &start_time) >= 0);
ASSERT_OK(pidref_get_start_time(&pidref, &start_time));
log_info("our starttime: " USEC_FMT, start_time);
_cleanup_(pidref_done_sigkill_wait) PidRef child = PIDREF_NULL;
assert_se(pidref_safe_fork("(stub)", FORK_RESET_SIGNALS|FORK_CLOSE_ALL_FDS, &child) >= 0);
ASSERT_OK(pidref_safe_fork("(stub)", FORK_RESET_SIGNALS|FORK_CLOSE_ALL_FDS|FORK_REOPEN_LOG, &child));
usec_t start_time2;
assert_se(pidref_get_start_time(&child, &start_time2) >= 0);
ASSERT_OK(pidref_get_start_time(&child, &start_time2));
log_info("child starttime: " USEC_FMT, start_time2);
assert_se(start_time2 >= start_time);
ASSERT_GE(start_time2, start_time);
}
static int intro(void) {

View File

@@ -259,7 +259,7 @@ static int run(int argc, char *argv[]) {
if (r <= 0)
return r;
if (arg_graceful && tpm2_support() != TPM2_SUPPORT_FULL) {
if (arg_graceful && !tpm2_is_fully_supported()) {
log_notice("No complete TPM2 support detected, exiting gracefully.");
return EXIT_SUCCESS;
}

View File

@@ -288,7 +288,7 @@ static int verb_info(int argc, char *argv[], void *userdata) {
pager_open(arg_pager_flags);
if (FLAGS_SET(arg_json_format_flags, SD_JSON_FORMAT_OFF)) {
static const struct sd_json_dispatch_field dispatch_table[] = {
static const sd_json_dispatch_field dispatch_table[] = {
{ "vendor", SD_JSON_VARIANT_STRING, sd_json_dispatch_const_string, offsetof(GetInfoData, vendor), SD_JSON_MANDATORY },
{ "product", SD_JSON_VARIANT_STRING, sd_json_dispatch_const_string, offsetof(GetInfoData, product), SD_JSON_MANDATORY },
{ "version", SD_JSON_VARIANT_STRING, sd_json_dispatch_const_string, offsetof(GetInfoData, version), SD_JSON_MANDATORY },
@@ -380,12 +380,12 @@ static int verb_introspect(int argc, char *argv[], void *userdata) {
if (r < 0)
return r;
const struct sd_json_dispatch_field dispatch_table[] = {
{ "interfaces", SD_JSON_VARIANT_ARRAY, sd_json_dispatch_strv, PTR_TO_SIZE(&auto_interfaces), SD_JSON_MANDATORY },
static const sd_json_dispatch_field dispatch_table[] = {
{ "interfaces", SD_JSON_VARIANT_ARRAY, sd_json_dispatch_strv, 0, SD_JSON_MANDATORY },
{}
};
r = sd_json_dispatch(reply, dispatch_table, SD_JSON_LOG|SD_JSON_ALLOW_EXTENSIONS, NULL);
r = sd_json_dispatch(reply, dispatch_table, SD_JSON_LOG|SD_JSON_ALLOW_EXTENSIONS, &auto_interfaces);
if (r < 0)
return r;
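The verb_introspect() hunk above makes the dispatch table static by storing a zero offset and handing &auto_interfaces to sd_json_dispatch() as the userdata pointer, instead of baking a stack address into the table via PTR_TO_SIZE(). The pattern is easiest to see in a standalone analogue (plain C, hypothetical Field/Info types, no sd-json involved): a table with static storage duration can hold offsets, but not the address of an automatic variable.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct Field {
            const char *name;
            size_t offset;                                /* applied to userdata at dispatch time */
            void (*set)(void *target, const char *value);
    } Field;

    typedef struct Info {
            const char *vendor;
            const char *product;
    } Info;

    static void set_string(void *target, const char *value) {
            *(const char **) target = value;
    }

    /* 'static' works because the table contains no runtime addresses. */
    static const Field table[] = {
            { "vendor",  offsetof(Info, vendor),  set_string },
            { "product", offsetof(Info, product), set_string },
            {}
    };

    static void dispatch(const char *name, const char *value, void *userdata) {
            for (const Field *f = table; f->name; f++)
                    if (strcmp(f->name, name) == 0)
                            f->set((uint8_t *) userdata + f->offset, value);
    }

    int main(void) {
            Info info = {};                               /* automatic variable, passed as userdata */

            dispatch("vendor", "ACME", &info);
            dispatch("product", "Anvil", &info);
            printf("%s %s\n", info.vendor, info.product);
            return 0;
    }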
@@ -412,7 +412,7 @@ static int verb_introspect(int argc, char *argv[], void *userdata) {
return r;
if (FLAGS_SET(arg_json_format_flags, SD_JSON_FORMAT_OFF) || list_methods) {
static const struct sd_json_dispatch_field dispatch_table[] = {
static const sd_json_dispatch_field dispatch_table[] = {
{ "description", SD_JSON_VARIANT_STRING, sd_json_dispatch_const_string, 0, SD_JSON_MANDATORY },
{}
};

View File

@@ -4,5 +4,6 @@ integration_tests += [
integration_test_template + {
'name' : fs.name(meson.current_source_dir()),
'storage': 'persistent',
'vm' : true,
},
]

View File

@@ -21,6 +21,9 @@ at_exit() {
trap at_exit EXIT
systemctl unmask systemd-networkd.service
systemctl start systemd-networkd.service
export NETWORK_NAME="10-networkctl-test-$RANDOM.network"
export NETDEV_NAME="10-networkctl-test-$RANDOM.netdev"
export LINK_NAME="10-networkctl-test-$RANDOM.link"
@@ -75,15 +78,6 @@ cmp "+4" "/etc/systemd/network/${NETWORK_NAME}.d/test.conf"
networkctl cat "$NETWORK_NAME" | grep '^# ' |
cmp - <(printf '%s\n' "# /etc/systemd/network/$NETWORK_NAME" "# /etc/systemd/network/${NETWORK_NAME}.d/test.conf")
networkctl edit --stdin --runtime "$NETDEV_NAME" <<EOF
[NetDev]
Name=test2
Kind=dummy
EOF
networkctl cat "$NETDEV_NAME" | grep -v '^# ' |
cmp - <(printf '%s\n' "[NetDev]" "Name=test2" "Kind=dummy")
cat >"/usr/lib/systemd/network/$LINK_NAME" <<EOF
[Match]
OriginalName=test2
@@ -95,13 +89,23 @@ EOF
SYSTEMD_LOG_LEVEL=debug EDITOR='true' script -ec 'networkctl edit "$LINK_NAME"' /dev/null
cmp "/usr/lib/systemd/network/$LINK_NAME" "/etc/systemd/network/$LINK_NAME"
# Test links
systemctl unmask systemd-networkd
systemctl stop systemd-networkd
# The interface test2 does not exist yet, hence the commands below must fail.
(! networkctl cat @test2)
(! networkctl cat @test2:netdev)
(! networkctl cat @test2:link)
(! networkctl cat @test2:network)
systemctl start systemd-networkd
# Create the .netdev file last, otherwise the .link file will not be applied to the interface.
networkctl edit --stdin --runtime "$NETDEV_NAME" <<EOF
[NetDev]
Name=test2
Kind=dummy
EOF
networkctl cat "$NETDEV_NAME" | grep -v '^# ' |
cmp - <(printf '%s\n' "[NetDev]" "Name=test2" "Kind=dummy")
# Wait for the interface to be created and configured.
SYSTEMD_LOG_LEVEL=debug /usr/lib/systemd/systemd-networkd-wait-online -i test2:carrier --timeout 20
networkctl cat @test2:network | cmp - <(networkctl cat "$NETWORK_NAME")