Thank you, Mantas. With your very detailed response, I was able to get something working. I also added StandardInput=socket to the service to give the command easy access to the data sent by the client. Here's the current configuration:

# /etc/systemd/system/family.socket

[Unit]
Description=Socket to tickle to update family netboot config

[Install]
WantedBy=network-online.target

[Socket]
ListenStream=192.168.1.10:14987
# want to run a new job, aka service, for each connection.
Accept=Yes
BindToDevice=br0
# 2s is default
TriggerLimitIntervalSec=5s

# /etc/systemd/system/family@.service
# [Socket] Accept=yes requires a multi-instance service, hence the @ in the file name.

[Unit]
Description=Update kernel netboot info for family system

[Service]
# not Type=oneshot for socket-activated
Type=simple
# next is the default.
RemainAfterExit=no
StandardInput=socket
ExecStart=sh -c "cat >> /root/washere"

# [Install] doesn't make sense for socket-activated services

There was one thing I wrote originally that I didn't mean:

>> in which the server
>> generates a new port, communicates it to the client, and it is this
>> second port that gets handed to the service? Or is the expectation
>> that the client service will do that early on and close the original
>> port?

The last sentence should have read "service", not "client service"; I had the server process in mind. But either way, I take your point that I have misremembered how it works. I think I must have formed an incorrect mental model in my much earlier (but post-TCP!) encounters with sockets, and, at least until your reply, I wasn't clear on the distinction between sockets and ports. (This is within port-based networking; I realize that systemd sockets also cover connection methods other than TCP ports.)

So thank you.

Ross
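
P.S. In case it helps anyone following along: to tickle the socket from a client for testing, something as simple as the following should do. This is just a sketch assuming the client shell is bash; the /dev/tcp redirection is a bash feature, not anything systemd provides, and the "tickle" payload is arbitrary.

    echo tickle > /dev/tcp/192.168.1.10/14987

Each connection should start a fresh family@ instance, and with StandardInput=socket the line sent ends up appended to /root/washere by the cat command.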