Hi,

I have a setup where my NFS server exports /export/nfsroot for my diskless clients. To ease the deployment of this nfsroot, I'd like to be able to run Ansible roles directly in there, so that packages and configuration are installed straight into the nfsroot. I thought an nspawn container would be a good way to achieve that, so I first symlinked my nfsroot like this:

    ln -s /export/nfsroot /var/lib/machines

Then I created a very simple nfsroot.nspawn file (Exec.Boot=true, Network.Private=yes) and enabled this container. It works fine; if I enable SSH, I can install whatever I want through Ansible.

Now my issue is this: some services should only be enabled for the diskless clients (lightdm will not start properly in my container), and some should only be enabled for the container (I don't need systemd-networkd on my clients, because they inherit their network configuration from the PXE stack).

What is the best way to maintain two separate sets of enabled services, one for running as a container and one for the diskless boot? The potential solutions I see are (rough sketches of the setup and of each option follow in the P.S. below):

- Masking the unwanted services individually on the kernel command line
- Using two targets (container.target / nfsroot.target) which specify what they require, and starting them from the kernel command line (after having disabled all the services)
- Using systemd.preset(5) (maybe with a bind mount of a preset file in the container)
- Creating an override file for each service with a line like ConditionVirtualization=container

Maybe I'm missing an easier way? My preference would be a whitelist of services to start when running in the container (probably only sshd and systemd-networkd, enough for Ansible deployments), ignoring everything else.

Thanks,

--
Antoine Pietri
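
P.S. For reference, the container setup above boils down to something like this (a minimal sketch; /etc/systemd/nspawn/ is where systemd-nspawn looks for per-container settings, and the file name has to match the machine name):

    # /etc/systemd/nspawn/nfsroot.nspawn
    [Exec]
    Boot=true

    [Network]
    Private=yes

The container is then enabled and started with "machinectl enable nfsroot" and "machinectl start nfsroot".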
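
For option 1, systemd accepts systemd.mask= on the kernel command line (documented in kernel-command-line(7); I'm assuming a version recent enough to support it). The PXE stack already controls the clients' command line, and the container side can forward boot arguments to its init through the .nspawn file:

    # appended to the PXE kernel command line of the diskless clients:
    systemd.mask=systemd-networkd.service

    # in nfsroot.nspawn, passed to the container's init because of Boot=true:
    [Exec]
    Parameters=systemd.mask=lightdm.service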
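
Option 2 could look roughly like this, with a hypothetical container.target that only pulls in the whitelisted services (unit names vary by distro, e.g. sshd.service vs. ssh.service), selected with systemd.unit= through the same Parameters= mechanism as above:

    # /etc/systemd/system/container.target (hypothetical)
    [Unit]
    Description=Whitelisted services for the deployment container
    Requires=basic.target
    After=basic.target
    Wants=sshd.service systemd-networkd.service

    # in nfsroot.nspawn:
    [Exec]
    Parameters=systemd.unit=container.target

A symmetric nfsroot.target selected from the PXE command line would cover the diskless side.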
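
Option 3 maps almost directly onto my whitelist preference: per systemd.preset(5), the first matching line wins, so an explicit whitelist followed by a catch-all "disable *" ignores everything else. The file could be bind-mounted into the container only:

    # /etc/systemd/system-preset/00-container.preset (sketch)
    enable sshd.service
    enable systemd-networkd.service
    disable *

Running "systemctl preset-all" inside the container would then apply it.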
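
And option 4 as drop-in overrides; a failed condition means the unit is skipped rather than failed, and the "!" prefix negates it, so both directions are covered:

    # /etc/systemd/system/lightdm.service.d/container.conf
    [Unit]
    # do not start lightdm inside the container
    ConditionVirtualization=!container

    # /etc/systemd/system/systemd-networkd.service.d/container.conf
    [Unit]
    # only start systemd-networkd inside the container
    ConditionVirtualization=container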