On 2020-01-13 19:38, Radoslaw Zarzynski wrote:
Hi Roman,
On Mon, Jan 13, 2020 at 5:36 PM Roman Penyaev <rpenyaev@xxxxxxx> wrote:
I do not understand. I am talking about a simple comparison metric for any
storage application: IOPS. Since both storage applications (legacy-osd,
crimson-osd) share exactly the same Ceph spec, that is a fair choice.
That way you're actually thinking about IOPS from an OSD instance
*disregarding how many HW resources it spends* to serve your
workload. This comparison ignores an absolutely fundamental difference
in architecture (see the measurement sketch after the list below):
* crimson-osd is single-threaded at the moment. It won't eat more
than 1 CPU core. That's by design.
* ceph-osd is multi-threaded. By default a single instance has up to 16
`tp_osd_tp` and 3 `msgr-worker-n` threads, which translates into a
theoretical upper bound of 19 CPU cores. In practice it's of course much
lower, but still far above what crimson-osd can use.
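To make that comparison resource-aware, one option is to sample per-process
CPU while rados bench is running and relate the observed IOPS to the cores
actually consumed. A minimal sketch, assuming pidstat from sysstat is
available and that the process names passed to pgrep match how the OSDs were
started:
$ pidstat -u -p $(pgrep -d, -f crimson-osd) 1
$ pidstat -u -p $(pgrep -d, -f ceph-osd) 1
# IOPS per core ~= measured IOPS / (total %CPU / 100)
That way the two daemons can be compared per CPU core consumed rather than
per OSD instance.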
Then it probably makes more sense to run the same load, but with iodepth=1?
Otherwise it is not quite fair: legacy-osd is able to execute requests in
parallel, but crimson-osd is not.
$ bin/rados bench -p test-pool 10 rand -t 1
legacy-osd: 34MB/s
crimson-osd: 53MB/s
At least this is fair and there is a noticeable difference.
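For a rough per-request view (assuming rados bench reports MiB/s and the pool
holds the 4 KiB objects left behind by the 'bench write -b 4096 --no-cleanup'
run), those figures translate to roughly:
$ echo $((34 * 1024 / 4)) $((53 * 1024 / 4))
8704 13568
i.e. about 8.7k vs 13.6k IOPS at queue depth 1.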
Also, I'm still curious how fast immediate completion of requests can be,
without leaving the transport layer and avoiding PG logic completely.
With the 'osd_immediate_completions=true' option set (there is a patch in
the first email of this thread) the bandwidth is the same for both:
$ bin/rados bench -p test-pool 10 write -b 4096 --no-cleanup -t 1
legacy-osd: 63MB/s
crimson-osd: 63MB/s
Which, I would say, is not quite impressive.
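Put differently, 63 MiB/s of 4 KiB writes at queue depth 1 works out to about
16k IOPS, i.e. roughly 62 microseconds per round trip (again assuming
MiB-based reporting):
$ echo $((63 * 1024 / 4)) $((1000000 / (63 * 1024 / 4)))
16128 62
which suggests both paths bottom out at about the same per-request latency
once the PG logic is bypassed.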
--
Roman