Re: threading requirements for librbd

Hi Blair!

Last I heard, you should budget 2-3 fds per OSD. This only affects
Glance in our cloud -- the hypervisors run as root with unlimited fds.

Here's our config in /etc/security/limits.d/91-nproc.conf:

glance     soft    nofile     32768
glance     hard    nofile     32768
glance     soft    nproc     32768

(No, we don't have 10000 OSDs, but multi-attached volumes may or may
not reuse fds -- I think they don't, but I haven't checked recently.)
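
If you want to sanity-check where a given qemu/kvm or glance process
sits relative to its limit, here's a rough sketch -- just python
reading /proc, nothing Ceph-specific, and the PID argument is whatever
pgrep gives you:

import os, sys

def fd_usage(pid):
    # Count open fds for a pid and how many of them are sockets
    # (run as root if you're inspecting qemu/kvm or glance processes).
    fd_dir = "/proc/%d/fd" % pid
    total = sockets = 0
    for fd in os.listdir(fd_dir):
        try:
            target = os.readlink(os.path.join(fd_dir, fd))
        except OSError:
            continue  # fd went away while we were looking
        total += 1
        if target.startswith("socket:"):
            sockets += 1
    return total, sockets

if __name__ == "__main__":
    pid = int(sys.argv[1])
    total, sockets = fd_usage(pid)
    print("pid %d: %d fds open, %d of them sockets" % (pid, total, sockets))
    with open("/proc/%d/limits" % pid) as f:
        for line in f:
            if line.startswith("Max open files"):
                print(line.rstrip())

Watching that while the guest does IO should show whether you're
creeping up on the nofile limit before things hang.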

Cheers, Dan


On Tue, Mar 8, 2016 at 1:32 PM, Blair Bethwaite
<blair.bethwaite@xxxxxxxxx> wrote:
> Hi all,
>
> Not getting very far with this query internally (RH), so hoping
> someone familiar with the code can spare me the C++ pain...
>
> We've hit soft thread count ulimits a couple of times with different
> Ceph clusters. The clients (Qemu/KVM guests on both Ubuntu and RHEL
> hosts) have hit the limit thanks to the many socket fds they open to
> the Ceph cluster, and then experienced weird (at least the first
> time) and difficult-to-debug (no qemu or libvirt logs) issues. The
> primary symptom seems to be an apparent IO hang in the guest with no
> well-defined trigger, i.e., the Ceph volumes work initially, but at
> some point we hit the ulimit and no further guest IOs make progress
> (iostat shows the devices at 100% util but no IOPS).
>
> qemu.conf has a max_files setting for tuning the relevant system
> default ulimit for guests, but we have no idea what it needs to be
> (so for now we've just set it very large).
>
> So, how many threads does librbd need? It seems to scale with the
> size (#OSDs and/or #PGs) of the cluster: in one case this issue only
> popped up for a user with 10 RBD volumes attached to an OpenStack
> instance after we added a handful of OSDs to expand the cluster
> (which pushed their qemu/kvm process's steady-state fd usage from
> ~900 to ~1100, past the 1024 default).
>
> --
> Cheers,
> ~Blairo
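
PS re the qemu.conf knob you mention: that's max_files in
/etc/libvirt/qemu.conf (there's a max_processes sibling for the thread
side). Something like the below would match the headroom we give
glance -- the 32768 is just a generous round number rather than a
calculated value, so size it to your own OSD count:

max_files = 32768
max_processes = 32768
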
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


