On 06/27/2012 01:10 PM, Jim Schutt wrote:
On 06/27/2012 12:48 PM, Stefan Priebe wrote:
Am 27.06.2012 20:38, schrieb Jim Schutt:
Actually, when my 166-client test is running,
"ps -o pid,nlwp,args -C ceph-osd"
tells me that I typically have ~1200 threads/OSD.
Huh, I see only 124 threads per OSD even with your settings.
FWIW:
2 threads/messenger (reader+writer):
166 clients
~200 OSD data peers (this depends on PG distribution/number)
~200 OSD heartbeat peers (ditto)
(I have ~200 OSD peers because I have 12 such servers
with 24 OSDs each.)
plus
24 OSD op threads (my tuning)
24 OSD disk threads (my tuning)
6 OSD filestore op threads (my tuning)
So, 2*566 + 54 = 1186 threads/OSD
Plus, there are various other worker threads, such as the
timer, message dispatch threads, monitor/MDS messengers, etc.
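The arithmetic above can be sketched as a quick back-of-the-envelope check (the counts are taken from this thread; the variable names are my own):

```python
# Recompute Jim's per-OSD thread estimate from the figures in the thread.
threads_per_messenger = 2   # one reader + one writer thread per messenger

clients = 166               # kernel clients in the test
data_peers = 200            # ~200 OSD data peers (PG-dependent)
heartbeat_peers = 200       # ~200 OSD heartbeat peers (ditto)
messengers = clients + data_peers + heartbeat_peers   # 566

op_threads = 24             # OSD op threads (Jim's tuning)
disk_threads = 24           # OSD disk threads (Jim's tuning)
filestore_threads = 6       # OSD filestore op threads (Jim's tuning)

total = (threads_per_messenger * messengers
         + op_threads + disk_threads + filestore_threads)
print(total)  # 1186, consistent with the ~1200 threads/OSD seen via ps
```

This omits the timer, dispatch, and monitor/MDS messenger threads mentioned above, which account for the gap up to the observed ~1200.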
Hmmm. The only other obvious difference, based on
what I remember from your other posts, is that you're
testing against RBD, right? I've been testing exclusively
with the Linux kernel client.
Right, and SSD. So it might be some timing issue.
I guess so.
-- Jim
Stefan
--