If we're talking about RBD clients (qemu), then the number also grows with the number of volumes attached to the client. With a single volume it was <1000, and it grows when there's heavy IO happening in the guest. I had to bump the open file limit up to several thousand (8000, was it?) to accommodate a client with 10 volumes in our cluster.

We just scaled the number of OSDs down, so hopefully I can get a graph of that. But I just guesstimated what it could become, and that's not necessarily the theoretical limit. Very bad things happen when you reach that threshold. It could also depend on guest settings (like queue depth) and on how much the workload seeks over the drive (i.e. how many different PGs it hits), but knowing the upper bound is the most critical part.

Jan
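As a rough way to sanity-check the limit before it bites, here is a minimal sketch that compares the process's current RLIMIT_NOFILE against a guesstimated need. The `estimate_nofile` helper and its worst-case assumption (one socket per OSD per attached volume, plus fixed overhead) are my own illustration, not anything librbd documents:

```python
import resource

def estimate_nofile(num_volumes, num_osds, overhead=512):
    # Hypothetical worst-case estimate: assume each attached volume may
    # end up with a socket to every OSD, plus a fixed overhead for the
    # qemu process itself. Illustrative only, not a documented formula.
    return num_volumes * num_osds + overhead

# Current soft/hard open-file limits for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

needed = estimate_nofile(num_volumes=10, num_osds=600)
if needed > soft:
    print(f"consider raising the nofile soft limit: "
          f"need ~{needed}, have {soft} (hard: {hard})")
```

In practice you'd raise the limit via the systemd unit (`LimitNOFILE=`) or `/etc/security/limits.conf` for the qemu process rather than from inside it.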
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com