Re: optane + 4x SSDs for VM disk images?

Could performance of Optane + 4x SSDs per node ever exceed that of
pure Optane disks?

No. With Ceph, the results for Optane and for good server SSDs are
almost the same. One difference is that you can run more OSDs per
Optane device than per a usual SSD. However, the latency you get from
both is almost the same, as most of it comes from Ceph itself, not from
the underlying storage. This also means Optanes are useless for
block.db/block.wal unless your SSDs are shitty desktop ones.
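
If you want to check this on your own cluster, here is a minimal
single-threaded probe using the python-rados / python-rbd bindings
(just a sketch to show the idea; fio's rbd engine does the same thing
properly). The pool and image names are placeholders, and the test
image must already exist and be at least ROUNDS * IO_SIZE bytes:

# Minimal QD=1 write latency probe via python-rados / python-rbd.
# Assumptions: bindings installed, /etc/ceph/ceph.conf readable,
# a pool "rbd" and a pre-created test image "bench" (placeholders).
import time
import rados
import rbd

IO_SIZE = 4096     # 4 KiB blocks
ROUNDS = 1000      # image must be at least ROUNDS * IO_SIZE bytes

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")        # placeholder pool name
image = rbd.Image(ioctx, "bench")        # placeholder image name
data = b"\0" * IO_SIZE

total = 0.0
for i in range(ROUNDS):
    start = time.perf_counter()
    image.write(data, i * IO_SIZE)       # synchronous write, one in flight
    image.flush()                        # defeat librbd writeback cache if enabled
    total += time.perf_counter() - start

avg_ms = total / ROUNDS * 1000
print("avg QD=1 write latency: %.2f ms (~%d IOPS)" % (avg_ms, 1000 / avg_ms))

image.close()
ioctx.close()
cluster.shutdown()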

And as usual I'm posting the link to my article
https://yourcmc.ru/wiki/Ceph_performance :)

You write that they are not reporting QD=1 single-threaded numbers,
but Tables 10 and 11 do report average latencies, which should be
"close to the same", so from those they can get:

Read latency: 0.32ms (i.e. ~3125 IOPS)
Write latency: 1.1ms (i.e. ~909 IOPS)

Really nice writeup and very true - should be a must-read for anyone
starting out with Ceph.

Thanks! :)

Tables 10 and 11 refer to QD=32 and 10 clients, which is a significant load, if only because their CPUs were at 61.4% during the test. I think the latency with QD=1 and 1 client would be slightly better in their case (at least if they turned powersave off :)).
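
Just to spell out the arithmetic behind those numbers: at QD=1 with a single client there is exactly one request in flight, so IOPS is simply the inverse of the average latency:

# IOPS at QD=1 is just the inverse of the average latency.
for name, latency_ms in (("read", 0.32), ("write", 1.1)):
    print("%s: %.2f ms -> ~%d IOPS" % (name, latency_ms, 1000 / latency_ms))
# read: 0.32 ms -> ~3125 IOPS
# write: 1.10 ms -> ~909 IOPS

At QD=32 with 10 clients that relation no longer holds (throughput scales roughly with the number of requests in flight divided by latency), which is why those tables don't directly give the QD=1 figure.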

There is a new version of their "reference architecture" here: https://www.micron.com/-/media/client/global/documents/products/other-documents/micron_9300_and_red_hat_ceph_reference_architecture.pdf

The closest thing in the new PDF is 100 RBD clients with QD=1 and a 70/30 mixed read/write workload. The numbers are messed up, though: they report a read latency of 0.72ms and a write latency of 0.37ms. That's probably reversed; it's the read that should be 0.37ms :) and a write latency of 0.72ms looks realistic for their setup...

--
Vitaliy Filippov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


