I'm on 0.94.5.
No, rbd cache is not enabled. Even if each Image creates only one extra thread, having tens of thousands of Image objects open would still mean tens of thousands of threads in my process. Practically speaking, does that mean I'm not allowed to cache Image objects?
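
To be concrete, the kind of caching I have in mind looks roughly like the
sketch below. It is only an illustration: the ImageCache class, its size cap,
and the close-on-evict policy are my own assumptions, not anything librbd
provides; the only librbd calls used are rbd.Image() and Image.close() from
the snippet further down.
-------------

import collections

import rbd


class ImageCache(object):
    """Hold at most max_open rbd.Image handles and close() the least
    recently used one when the cap is exceeded, so the per-image threads
    are released instead of accumulating."""

    def __init__(self, ioctx, max_open=100):
        self.ioctx = ioctx
        self.max_open = max_open
        self._images = collections.OrderedDict()

    def get(self, name):
        if name in self._images:
            # Re-insert to mark this handle as most recently used.
            image = self._images.pop(name)
            self._images[name] = image
            return image
        image = rbd.Image(self.ioctx, name)  # opening spawns the extra threads
        self._images[name] = image
        if len(self._images) > self.max_open:
            # Evict and close the least recently used handle.
            _, oldest = self._images.popitem(last=False)
            oldest.close()
        return image

    def close_all(self):
        for image in self._images.values():
            image.close()
        self._images.clear()

-------------
Even with something like this, max_open limits how many images I can keep
open at once, so it does not fully solve the problem.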
On Fri, Nov 20, 2015 at 8:24 PM, Haomai Wang <haomaiwang@xxxxxxxxx> wrote:
What's your ceph version?
Do you have rbd cache enabled? By default, each Image should only have one
extra thread (maybe we should obsolete this as well?).
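For reference, rbd cache is a client-side option set in ceph.conf, e.g.:

[client]
rbd cache = false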
On Sat, Nov 21, 2015 at 9:26 AM, Allen Liao <aliao.svsgames@xxxxxxxxx> wrote:
> I am developing a Python application (using rbd.py) that requires querying
> information about tens of thousands of rbd images. I have noticed that the
> number of threads in my process grows linearly with each Image object that is
> created.
>
> After creating about 800 Image objects (that all share a single ioctx), my
> process already has more than 2000 threads. I get the thread count using
> `ps huH p <pid> | wc -l`.
>
> If I call close() on each Image object after operating on it, the threads are
> cleaned up. However, I want to cache these objects and reuse them, since it is
> expensive to recreate tens of thousands of these objects all the time.
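>
> Concretely, the close-after-use pattern that keeps the thread count flat is
> something like this ('myimage0' is just a placeholder name, and size() stands
> in for whatever per-image query I need):
>
> image = rbd.Image(ioctx, 'myimage0')
> try:
>     size = image.size()   # any per-image query
> finally:
>     image.close()         # threads for this image are released here
>
> But doing that for every query means reopening images over and over.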
>
> Is it expected behavior for librbd to create 4-5 threads for each Image
> object that is opened?
>
> For example, I'm doing something similar to:
> -------------
>
> import rbd
> import rados
>
> cluster = rados.Rados(conffile='my_ceph.conf')
> cluster.connect()
> ioctx = cluster.open_ioctx('mypool')
>
> # With each object, new threads are created until close() is called
> image0 = rbd.Image(ioctx, 'myimage0')
> image1 = rbd.Image(ioctx, 'myimage1')
> ...
> image9000 = rbd.Image(ioctx, 'myimage9000')
>
>
--
Best Regards,
Wheat
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com