On Wed, 13 Sep 2017, Jeff Layton wrote:
> I recently hit the problem described in this bug when rolling some new
> tests for cephfs:
>
> http://tracker.ceph.com/issues/20988
>
> While I hit it in a testcase, I suspect something like ganesha or samba
> could also hit this, as they can create several CephContexts in the
> course of their duties and their teardown is not necessarily coordinated
> in any way.
>
> The problem is basically that the CephContext is a random pile of stuff,
> some of which has an effect on global objects (the lockdep stuff, in
> particular, but there may be more).
>
> There are a few ways to fix it... we could just ensure that the lockdep
> stuff is handled cleanly, but I wonder... is there any real-world
> use case for having multiple CephContexts in a single process image?
>
> It contains stuff like admin sockets, and certain threads are tied to
> it, etc. This seems like the sort of thing that should just have a
> single instance per process. Should we start moving things in that
> direction, or should I look at fixing this another way?

There are a couple of reasons why it's not a singleton:

- It makes library usage awkward. Having "global" state in a shared lib
  is an antipattern, so librados was set up with create/destroy
  functions, which are built around the cct. In practice it means you
  can instantiate multiple independent clients in the same process,
  which is IMO useful.

- We would eventually like to host multiple OSD daemons within the same
  process in order to multiplex the network IO across the same
  connections (reducing e.g. RDMA connection count) and to allow more
  efficient use of DPDK/SPDK. Moving from g_ceph_context -> explicit
  ccts is a step in that direction, and further away from any
  singletons.

sage
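
As a minimal sketch of the multiple-independent-clients pattern described
above, using the public librados C API: each rados_create() builds its own
context behind the rados_t handle, and the two handles are connected and
torn down independently, which is exactly the situation where uncoordinated
teardown of shared global state (e.g. lockdep) can bite. This assumes a
reachable cluster and a default ceph.conf search path; the "admin" client
id is a placeholder and error handling is abbreviated, so treat it as an
illustration rather than actual test code from the bug report.

    #include <stdio.h>
    #include <rados/librados.h>

    int main(void)
    {
            rados_t a, b;

            /* two independent clients, each with its own CephContext */
            if (rados_create(&a, "admin") < 0 ||
                rados_create(&b, "admin") < 0) {
                    fprintf(stderr, "rados_create failed\n");
                    return 1;
            }

            /* each handle reads its own config (NULL = default paths) */
            rados_conf_read_file(a, NULL);
            rados_conf_read_file(b, NULL);

            if (rados_connect(a) < 0 || rados_connect(b) < 0) {
                    fprintf(stderr, "rados_connect failed\n");
                    return 1;
            }

            /* ... use the two clients independently ... */

            /* teardown order is entirely up to the caller */
            rados_shutdown(a);
            rados_shutdown(b);
            return 0;
    }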