Re: [PATCH v3 0/3] shmem: Allow userspace monitoring of tmpfs for lack of space.

On Tue, Apr 19, 2022 at 6:29 PM Gabriel Krisman Bertazi
<krisman@xxxxxxxxxxxxx> wrote:
>
> Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> writes:
>
> Hi Andrew,
>
> > On Mon, 18 Apr 2022 17:37:10 -0400 Gabriel Krisman Bertazi <krisman@xxxxxxxxxxxxx> wrote:
> >
> >> When provisioning containerized applications, multiple very small tmpfs
> >
> > "files"?
>
> Actually, filesystems.  In cloud environments, we have several small
> tmpfs instances associated with containerized tasks.
>
> >> are used, for which one cannot always predict the proper file system
> >> size ahead of time.  We want to be able to reliably monitor filesystems
> >> for ENOSPC errors, without depending on the application being executed
> >> reporting the ENOSPC after a failure.
> >
> > Well that sucks.  We need a kernel-side workaround for applications
> > that fail to check and report storage errors?
> >
> > We could do this for every syscall in the kernel.  What's special about
> > tmpfs in this regard?
> >
> > Please provide additional justification and usage examples for such an
> > extraordinary thing.
>
> A cloud provider deploying containerized applications might not control
> the application, so patching userspace wouldn't be a solution.  More
> importantly - and this is why it is shmem specific - they want to
> differentiate between a user getting ENOSPC due to an insufficiently
> provisioned fs size and ENOSPC due to running out of memory in a
> container, both of which return ENOSPC to the process.
>

Isn't there already a per-memcg OOM handler that the orchestrator could
use to detect the latter?
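
For reference, a minimal sketch of the kind of monitoring I mean (this
assumes cgroup v2, and the cgroup path is a made-up placeholder - adjust it
to however the orchestrator lays out the hierarchy):

#include <stdio.h>

int main(void)
{
	/* Hypothetical container cgroup path, for illustration only. */
	FILE *f = fopen("/sys/fs/cgroup/mycontainer/memory.events", "r");
	char line[256];
	unsigned long val;

	if (!f)
		return 1;
	/* memory.events exposes per-cgroup "oom" and "oom_kill" counters,
	 * so the orchestrator can tell memcg pressure apart from a tmpfs
	 * that was provisioned too small. */
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "oom_kill %lu", &val) == 1)
			printf("oom_kill events: %lu\n", val);
		else if (sscanf(line, "oom %lu", &val) == 1)
			printf("oom events: %lu\n", val);
	}
	fclose(f);
	return 0;
}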

> A system administrator can then use this feature to monitor a fleet of
> containerized applications in a uniform way, detect provisioning issues
> with different causes, and adjust the deployment accordingly.
>
> I originally submitted this as a new fanotify event, but given the
> specificity of shmem, Amir suggested the interface I'm implementing
> here.  We originally raised this discussion here:
>
> https://lore.kernel.org/linux-mm/CACGdZYLLCqzS4VLUHvzYG=rX3SEJaG7Vbs8_Wb_iUVSvXsqkxA@xxxxxxxxxxxxxx/
>

To put things in context, the points I was trying to make in this
discussion are:

1. Why isn't monitoring with statfs() a sufficient solution? (see the
    sketch after this list)  More specifically, the shared disk space
    provisioning problem does not sound very tmpfs specific to me.
    It is a well-known issue for thin-provisioned storage in environments
    with shared resources such as the ones you describe.
2. OTOH, exporting internal fs stats via /sys/fs for debugging, health
    monitoring or whatever seems legit to me and is widely practiced by
    other filesystems, so exposing those tmpfs stats as this patch set does
    seems fine to me.
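
The statfs() polling I was referring to in point 1 could be as simple as
the following userspace sketch (not part of the patches):

#include <stdio.h>
#include <sys/statfs.h>

int main(int argc, char **argv)
{
	struct statfs st;
	const char *path = argc > 1 ? argv[1] : "/tmp";

	if (statfs(path, &st) != 0) {
		perror("statfs");
		return 1;
	}
	/* Note: a tmpfs mounted without a size limit may report
	 * f_blocks == 0, in which case there is no limit to monitor. */
	if (st.f_blocks)
		printf("%s: %llu of %llu blocks available\n", path,
		       (unsigned long long)st.f_bavail,
		       (unsigned long long)st.f_blocks);
	return 0;
}

An orchestrator could run something like this periodically and alert when
the available ratio drops below a threshold, without any cooperation from
the application.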

Another point worth considering in favor of /sys/fs/tmpfs:
since tmpfs is FS_USERNS_MOUNT, the ability of a sysadmin to monitor all
tmpfs mounts in the system and their usage is limited.

Therefore, having a central way to enumerate all tmpfs instances in the
system, as we can for blockdev fs instances and fuse fs instances, does not
sound like a terrible idea in general.
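
To illustrate the limitation, enumerating tmpfs mounts from userspace today
boils down to scanning mountinfo, which only sees the caller's own mount
namespace (a rough sketch):

#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Only shows mounts visible in the current mount namespace; tmpfs
	 * instances mounted inside containers' namespaces are missed, which
	 * is the gap a central listing would close. */
	FILE *f = fopen("/proc/self/mountinfo", "r");
	char line[4096];

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		char mntpoint[256], fstype[64];
		char *sep = strstr(line, " - ");

		/* Field 5 is the mount point; the field after " - " is the
		 * filesystem type. */
		if (!sep)
			continue;
		if (sscanf(sep + 3, "%63s", fstype) != 1 ||
		    strcmp(fstype, "tmpfs") != 0)
			continue;
		if (sscanf(line, "%*s %*s %*s %*s %255s", mntpoint) == 1)
			printf("tmpfs at %s\n", mntpoint);
	}
	fclose(f);
	return 0;
}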

> > Whatever that action is, I see no user-facing documentation which
> > guides the user in how to take advantage of this?
>
> I can follow up with a new version with documentation, if we agree this
> feature makes sense.
>

Given the time of year and the participants involved, shall we continue this
discussion at LSFMM?

I am not sure this even requires a shared FS/MM session, but I don't mind
trying to allocate a shared FS/MM slot if Andrew and the MM folks are
interested in taking part in the discussion.

As long as memcg is able to report OOM to the orchestrator, the problem does not
sound very tmpfs specific to me.

As Ted explained, cloud providers (for some reason) charge by disk size and
not by disk usage, so even for non-tmpfs filesystems, growing the fs online
on demand could prove to be a rewarding practice for cloud applications.

Thanks,
Amir.


