Just following up since this thread went silent after a few comments showing similar concerns, but no explanation of the behavior. Can anyone point to some code or documentation which explains how to estimate the expected number of TCP connections a client would open based on read/write volume, # of volumes, # of OSDs in the pool, etc?
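In the absence of official documentation, here is a rough back-of-envelope sketch of the kind of estimate I'm after. It assumes (and this is an assumption, not confirmed Ceph behavior) that each attached volume gets its own librados client instance with no connection sharing, and that each client may in the worst case open one TCP connection to every OSD in the pool plus a monitor session. The function name and model are hypothetical:

```python
# Hypothetical worst-case model -- NOT from Ceph docs.
# Assumptions:
#   * each RBD volume = one librados client instance (no sharing)
#   * each client may eventually connect to every OSD in the pool
#   * each client also holds one monitor session
def estimate_max_tcp_connections(num_volumes, num_osds_in_pool, num_monitors=1):
    """Worst-case TCP connection count for one hypervisor."""
    per_volume = num_osds_in_pool + num_monitors  # OSD sessions + mon session
    return num_volumes * per_volume

# Example: 10 attached volumes against a 100-OSD pool
print(estimate_max_tcp_connections(10, 100))  # 1010
```

If connections are instead shared per-client rather than per-volume, the count would be roughly independent of the number of volumes, which is exactly the behavior I'd like someone to confirm or refute.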
--
On Tue, Oct 27, 2015 at 5:05 AM, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
On Mon, Oct 26, 2015 at 10:48 PM, Jan Schermer <jan@xxxxxxxxxxx> wrote:
> If we're talking about RBD clients (qemu) then the number also grows with
> number of volumes attached to the client.
I never thought about that, but it might explain a problem we have
where multiple attached volumes crash an HV. I had assumed that
multiple volumes would reuse the same rados client instance, and thus
reuse the same connections to the OSDs.
-- dan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com