Re: threading requirements for librbd

Are you interested in the max FD count or the max thread count?  You mention both in your email.

Right now qemu doesn't pool OSD connections when multiple RBD images are attached to the same VM -- each image uses its own librbd/librados instance.  Since each image might, in the worst case, have to connect to every OSD and MON, that gives you a rough upper bound on what qemu will require (e.g. 3 MONs, 1000 OSDs, max 10 RBD images per VM == roughly 10,000 socket connections worst case).
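To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch (plain Python, nothing librbd-specific); it just multiplies the image count by the MON + OSD count as described above:

    # Worst-case socket/FD estimate for a qemu process using librbd.
    # Assumes each attached image has its own librbd/librados instance and
    # may, in the worst case, hold a connection to every MON and every OSD.
    def worst_case_sockets(num_mons, num_osds, num_images):
        return num_images * (num_mons + num_osds)

    # Example from above: 3 MONs, 1000 OSDs, 10 images per VM.
    print(worst_case_sockets(3, 1000, 10))  # 10030 -- roughly 10,000 sockets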

With regard to threads, as a rough guide, each socket connection currently requires two threads (at least until the newer messaging layer is adopted on the client side), and each open RBD image requires a few additional threads for internal processing (e.g. cache writeback, AIO processing, etc.).
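Putting the two rules of thumb together, a hedged sketch of the worst-case thread count might look like the following; the per-image constant of 5 is only an assumption standing in for "a few threads", not a number taken from librbd:

    # Rough worst-case thread estimate per qemu process, extending the
    # socket estimate above.
    THREADS_PER_SOCKET = 2   # reader/writer thread pair per connection
    PER_IMAGE_THREADS = 5    # assumption: "a few" internal threads per image

    def worst_case_threads(num_mons, num_osds, num_images):
        sockets = num_images * (num_mons + num_osds)
        return sockets * THREADS_PER_SOCKET + num_images * PER_IMAGE_THREADS

    print(worst_case_threads(3, 1000, 10))  # ~20,000 threads worst case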

-- 

Jason Dillaman 


----- Original Message -----
> From: "Blair Bethwaite" <blair.bethwaite@xxxxxxxxx>
> To: ceph-users@xxxxxxxxxxxxxx
> Sent: Tuesday, March 8, 2016 7:32:01 AM
> Subject:  threading requirements for librbd
> 
> Hi all,
> 
> Not getting very far with this query internally (RH), so hoping
> someone familiar with the code can spare me the C++ pain...
> 
> We've hit soft thread count ulimits a couple of times with different
> Ceph clusters. The clients (Qemu/KVM guests on both Ubuntu and RHEL
> hosts) have hit the limit due to the many socket fds open to the Ceph
> cluster and then experienced weird (at least the first time) and
> difficult-to-debug (no qemu or libvirt logs) issues. The primary
> symptom seems to be an apparent IO hang in the guest with no
> well-defined trigger, i.e., the Ceph volumes seem to work initially
> but then somehow we hit the ulimit and no further guest IOs progress
> (iostat shows devices at 100% util but no IOPS).
> 
> qemu.conf has a max_files setting for tuning the relevant system
> default ulimit on guests, but we've no idea what it needs to be (so
> for now we've just gone very large).
> 
> So, how many threads does librbd need? It seems to scale with the
> size (#OSDs and/or #PGs) of the cluster, as in one case this issue
> popped up for a user with 10 RBD volumes attached to an OpenStack
> instance only after we added a handful of OSDs to expand the cluster
> (which pushed their qemu/kvm processes' steady-state fd usage from
> ~900 to ~1100, past the 1024 default).
> 
> --
> Cheers,
> ~Blairo
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


