On 05/30/2012 01:26 PM, Stefan Priebe wrote:
Hi Mark,
On 30.05.2012 16:56, Mark Nelson wrote:
On 05/30/2012 09:53 AM, Stefan Priebe wrote:
On 30.05.2012 16:49, Mark Nelson wrote:
You could try setting up a pool with a replication level of 1 and see
how that does. It will be faster in any event, but it would be
interesting to see how much faster.
Is there an easier way than modifying the CRUSH map?
something like:
ceph osd pool create POOL [pg_num [pgp_num]]
then:
ceph osd pool set POOL size VALUE
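For example (the pool name and pg counts here are just placeholders, use whatever fits your setup):
ceph osd pool create rep1test 128 128
ceph osd pool set rep1test size 1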
With pool size 1, the writes are constant at around 112 MB/s:
http://pastebin.com/raw.php?i=haDPNTfQ
So does it have something to do with the replication?
Stefan
Well now that is interesting. Replication is pretty network-heavy. In
addition to the client transfers to the OSDs, you have each OSD node
sending data to and receiving data from the others. Based on these
results it looks like you may be stalling while waiting for data to
replicate, so the client stops sending new requests. If you turn the
osd, filestore, and messenger debugging up to 20, you'll get a ton of
info that may provide more clues.
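For reference, a rough sketch of what that might look like in ceph.conf (just an example of the knobs, not tuned values):
[osd]
    debug osd = 20
    debug filestore = 20
    debug ms = 20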
Otherwise, a while ago I started making a list of performance-related
settings and tests that we (Inktank) may want to check for customers.
Note that this is a work in progress and the values may not be exactly
right yet. You could check and see if any of the networking settings
have changed on your setup between 3.0 and 3.4:
http://ceph.com/wiki/Performance_analysis
Also, there was a thread a while back where Jim Schutt saw problems that
looked like disk performance issues but stemmed from the TCP autotuning policy:
http://www.spinics.net/lists/ceph-devel/msg05049.html
That seemed to be more an issue with lots of clients and OSDs per node,
but I thought I'd mention it since some of the effects are similar.
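If you want to rule that out on your boxes, the sysctls to look at would be something like this (purely illustrative, not a recommendation to change anything):
# is TCP receive-buffer autotuning enabled? (1 = yes)
sysctl net.ipv4.tcp_moderate_rcvbuf
# autotuning limits: min / default / max in bytes
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem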
Mark