Re: Query on sshd.socket sshd.service approaches

On Wed, Mar 06, 2024 at 09:31:52AM +0100, Lennart Poettering wrote:
> On Mi, 06.03.24 11:11, Shreenidhi Shedi (shreenidhi.shedi@xxxxxxxxxxxx) wrote:
> 
> > Hi All,
> >
> > What is the rationale behind using sshd.socket other than not keeping sshd
> > daemon running always and reducing memory consumption?
> 
> <...>
> 1. Traditional mode (i.e. no socket activation)
>    + connections are served immediately, minimal latency during
>      connection setup
>    - takes up resources all the time, even if not used
> 
> 2. Per-connection socket activation mode
>    + takes up almost no resources when not used
>    + zero state shared between connections
>    + robust updates: socket stays connectible throughout updates
>    + robust towards failures in sshd: the bad instance dies, but sshd
>      stays connectible in general
>    + resource accounting/enforcement separate for each connection
>    - slightly bigger latency for each connection coming in
>    - slightly more resources being used if many connections are
>      established in parallel, since each will get a whole sshd
>      instance of its own.
> 
> 3. Single-instance socket activation mode
>    + takes up almost no resources when not used
>    + robust updates: socket stays connectible throughout updates
> 
> > With sshd.socket, systemd does a fork/exec on each connection which is
> > expensive and with the sshd.service approach server will just connect with
> > the client which is less expensive and faster compared to
> > sshd.socket.
> 
> The question of course is how many SSH instances you serve every
> minute. My educated guess is that most SSH installations have a use
> pattern that's more on the "sporadic use" side of things. There are
> certainly heavy use scenarios though (e.g. let's say you are github
> and serve git via sshd).
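For concreteness, a minimal sketch of what the two socket-activation
modes look like on the systemd side (unit contents are illustrative,
not the stock OpenSSH packaging; mode 3 additionally requires an sshd
build that can take over a listening fd passed in via $LISTEN_FDS):

    # Mode 2: sshd.socket with Accept=yes; systemd accepts each
    # connection itself and spawns one instance of a template unit
    # (sshd@.service) per connection.
    [Socket]
    ListenStream=22
    Accept=yes

    # Matching template sshd@.service ([Service] section): run sshd
    # in inetd mode on the already-accepted connection fd.
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket

    # Mode 3: sshd.socket with Accept=no (the default); systemd hands
    # the single listening socket to one long-running sshd instance.
    [Socket]
    ListenStream=22
    Accept=no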

A more relevant source of problems here IMO is not the "fair use"
pattern, but the misuse pattern.

The per-connection template unit mode, unfortunately, is really unfit
for any machine with ssh daemons exposed to the IPv4 internet: within
a few months of operation such a machine starts receiving at least 3-5
unauthenticated connections a second from hierarchically and
geographically distributed sources. Those clients are probing for
vulnerabilities and dictionary passwords; they will never authenticate
on a reasonably configured system, so at the end of the day this is
pure junk traffic.

If sshd is deployed the classic way (№1 or №3), each junk connection
is accepted and possibly rate-limited by the sshd program itself, and
the state of the service manager (pid1) is unaffected. Units are only
created for authenticated connections via PAM hooks in the "session
stack"; the same goes for other accounting entities and resources.
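(The "rate-limited by the sshd program itself" part is OpenSSH's own
MaxStartups throttle; the values below are the upstream sshd_config
defaults:)

    # Once 10 unauthenticated connections are pending, refuse new ones
    # with probability 30%, rising linearly to 100% at 100 pending.
    MaxStartups 10:30:100
    # Unauthenticated connections are dropped after 120 seconds anyway.
    LoginGraceTime 120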
If sshd is deployed the per-connection unit way (№2), each junk
connection touches system manager state, i.e. makes the machine create
and immediately destroy a unit, paying the fork/exec, accounting and
sandboxing setup costs every time. If the instance units spawned for
junk connections are not automatically collected (e.g. via the
`CollectMode=inactive-or-failed` unit property), this leads to
unbounded memory use by pid1 on an unattended machine (really bad),
driven entirely by external actors.
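(That collection can be enabled with a drop-in for the instance
template; the path below assumes the template is named sshd@.service,
as in the common packaging:)

    # /etc/systemd/system/sshd@.service.d/collect.conf
    [Unit]
    # Garbage-collect the instance unit as soon as it goes inactive or
    # failed, so junk connections cannot pile up unit state in pid1.
    CollectMode=inactive-or-failed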

> I'd suggest to distros to default to mode
> 2, and alternatively support mode 3 if possible (and mode 1 if they
> don't want to patch the support for mode 3 in)

So mode 2 really only makes sense for deployments reachable solely
from intranets with little junk traffic.
