Re: Ceph, container and memory

On Thu, Mar 7, 2019 at 3:08 PM Patrick Donnelly <pdonnell@xxxxxxxxxx> wrote:
>
> On Thu, Mar 7, 2019 at 3:02 PM Sage Weil <sweil@xxxxxxxxxx> wrote:
> >
> > On Thu, 7 Mar 2019, Gregory Farnum wrote:
> > > > With that caveat, it seems like we should *also* look at the cgroup limit
> > > > * some factor (e.g., .8) and use it as a ceiling for *_memory_target...
> > >
> > > Even that probably has too many assumptions though. If we're in a
> > > cgroup in a non-Kubernetes/plain container context, there's every
> > > possibility that we aren't the only consumer of that cgroup's resource
> > > limits (for instance, all of the Ceph processes on a machine in one
> > > cgroup, or vhost-style process isolation with all of a tenant in one
> > > cgroup), so there would need to be another input telling us how much
> > > memory to target for any individual process. :/
> >
> > That's a good point.  We could use it as a *ceiling*, though...
>
> In practice, I don't think anyone places more than one process in a
> cgroup like this. Who does this vhost-style isolation you're referring
> to?

I believe that's the original use case for cgroups: shared hosting
providers, or anybody doing VPS or "light virtualization", would give
each user a separate cgroup, but all of that user's processes ran
within just the one.
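
For illustration only, here is a minimal sketch (not from the thread, and
not Ceph's implementation) of the "cgroup limit as a ceiling" idea: read
the memory limit from the cgroup filesystem and clamp a configured
*_memory_target to some fraction of it. The file paths, function names,
and the 0.8 factor are assumptions for the example; it also assumes the
container-style case where the process's own cgroup is what appears under
/sys/fs/cgroup (a fuller version would resolve /proc/self/cgroup).

#!/usr/bin/env python3
# Illustrative sketch: use the cgroup memory limit only as a *ceiling*
# for a configured memory target, never as the target itself, because
# other processes may share the same cgroup (the caveat raised above).

CGROUP_V2_MAX = "/sys/fs/cgroup/memory.max"
CGROUP_V1_LIMIT = "/sys/fs/cgroup/memory/memory.limit_in_bytes"

def cgroup_memory_limit():
    """Return the cgroup memory limit in bytes, or None if unlimited/unknown."""
    for path in (CGROUP_V2_MAX, CGROUP_V1_LIMIT):
        try:
            with open(path) as f:
                raw = f.read().strip()
        except OSError:
            continue
        if raw == "max":          # cgroup v2: no limit set
            return None
        limit = int(raw)
        if limit >= 1 << 60:      # cgroup v1 reports a huge sentinel when unlimited
            return None
        return limit
    return None

def clamp_memory_target(configured_target, safety_factor=0.8):
    """Clamp a configured target to a fraction of the cgroup limit, if any."""
    limit = cgroup_memory_limit()
    if limit is None:
        return configured_target
    return min(configured_target, int(limit * safety_factor))

if __name__ == "__main__":
    # e.g. a 4 GiB osd_memory_target, lowered if the cgroup ceiling is smaller
    print(clamp_memory_target(4 * 1024**3))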


