Thanks a lot, Marc - this looks similar to the post I found:
It seems to suggest that this wouldn't be an issue on more recent kernels, but it would be great to get confirmation of that. I'll keep researching.
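In case it helps anyone searching later: the deadlock in question is the classic memory-pressure loop with a colocated kernel client. Under low memory the kernel tries to flush CephFS page-cache writeback, but completing that flush requires the local osd daemon to allocate memory, which it can't, so both sides stall. It's the kernel CephFS client that's implicated; ceph-fuse stays in userspace. A quick way to check which client a node is using (the mount point /mnt/cephfs is a placeholder, adjust for your setup):

```shell
# Report which CephFS client a mount point uses, based on /proc/mounts.
# /mnt/cephfs is a placeholder path; "ceph" = kernel client, "fuse.ceph*" = ceph-fuse.
fstype=$(awk '$2 == "/mnt/cephfs" {print $3}' /proc/mounts)
case "$fstype" in
  ceph)       echo "kernel client (the one implicated in the deadlock)" ;;
  fuse.ceph*) echo "ceph-fuse (userspace client, avoids it)" ;;
  *)          echo "not a CephFS mount (or not mounted)" ;;
esac
```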
On Thu, 11 Apr 2019 at 19:50, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
AFAIK, with cephfs mounted on osd nodes you at least risk this 'kernel deadlock'. I have the same setup, but with enough memory it hasn't been a problem. Search the mailing list for this.
I am looking at a similar setup, but with Mesos, and struggling with a CNI plugin we have to develop.
-----Original Message-----
From: Bob Farrell [mailto:bob@xxxxxxxxxxxxxx]
Sent: donderdag 11 april 2019 20:45
To: ceph-users@xxxxxxxxxxxxxx
Subject: Topology query
Hello. I am running Ceph Nautilus v14.2.0 on Ubuntu Bionic 18.04 LTS.
I would like to ask whether anybody can advise on any potential problems with my setup, as I am running a lot of services on each node.
I have 8 large dedicated servers, each with two physical disks. All
servers run Docker Swarm and host numerous web applications.
I have also installed Ceph on each node (not in Docker). The secondary
disk on each server hosts an LVM volume which is dedicated to Ceph. Each
node runs one of each: osd, mon, mgr, mds. I use CephFS to mount the
data into each node's filesystem, which is then accessed by numerous
containers via Docker bindmounts.
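For reference, the mounts on each node look roughly like this in /etc/fstab (the mon addresses, mount point, and secretfile path below are placeholders, not my actual config):

```
# Kernel client mount (the client implicated in the colocated-osd deadlock):
10.0.0.1:6789,10.0.0.2:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0

# ceph-fuse alternative, which runs in userspace and sidesteps that deadlock:
none  /mnt/cephfs  fuse.ceph  ceph.id=admin,_netdev,defaults  0 0
```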
So far everything is working great, but we haven't put anything under heavy load yet. I googled around to see if there are any potential problems with what I'm doing but couldn't find much. There was one forum post I read [but can't find now] which warned against this unless using the very latest glibc, due to kernel fsync issues (IIRC), but that post was from 2014 so I hope I'm safe?
Thanks for the great project - I got this far just from reading the docs
and writing my own Ansible script (wanted to learn Ceph properly). It's
really good stuff. : )
Cheers,
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com