Re: 3-node cluster with 3 x Intel Optane 900P - very low benchmarked performance (200 IOPS)?

Is that a question for me or for Victor? :-)

I did test my drives; the Intel NVMes are capable of something like 95,100 single-thread IOPS.
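For reference, a test along these lines is what produces that kind of number (a minimal sketch; the device path is a placeholder, and the run destroys data on the device):

-----------------
# single-thread 4k sync random write test (assumed device path; destructive)
fio --filename=/dev/nvme0n1 --direct=1 --fsync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=single-thread-test
-----------------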

On March 10, 2019, 1:31:15 GMT+03:00, Martin Verges <martin.verges@xxxxxxxx> wrote:
Hello, 

did you test the performance of your individual drives? 

Here is a small snippet:
-----------------
# WARNING: fio writes directly to the raw device, destroying all data on it
DRIVE=/dev/XXX
smartctl -a $DRIVE
# 4k direct sync writes at iodepth=1, sweeping 1 to 16 parallel jobs
for i in 1 2 4 8 16; do echo "Test $i"; fio --filename=$DRIVE --direct=1 --sync=1 --rw=write --bs=4k --numjobs=$i --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test; done
-----------------

Please share the results so that we know what's possible with your hardware.

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx

Vitaliy Filippov <vitalif@xxxxxxxxxx> wrote on Sat., March 9, 2019, 21:09:
There are two:

fio -ioengine=rbd -direct=1 -name=test -bs=4k -iodepth=1 -rw=randwrite -pool=bench -rbdname=testimg

fio -ioengine=rbd -direct=1 -name=test -bs=4k -iodepth=128 -rw=randwrite -pool=bench -rbdname=testimg

The first measures your minimum possible latency. It does not scale with the number of OSDs at all, but it is usually what real applications like DBMSes need.
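As an illustration (the numbers are made up): with an end-to-end write latency of 0.5 ms, the iodepth=1 test can never exceed 1 / 0.0005 s = 2000 IOPS, regardless of how many OSDs you add.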

The second measures your maximum possible random write throughput, which you probably won't be able to utilize unless you have enough VMs all writing in parallel.
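Both commands assume a pool named bench and an image named testimg already exist, and that fio was built with librbd support. A sketch of that setup (the PG count is a guess; adjust it for your cluster):

-----------------
# create the pool and test image used by the fio runs above
ceph osd pool create bench 64
rbd pool init bench
rbd create bench/testimg --size 10G
-----------------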

--
With best regards,
   Vitaliy Filippov

--
With best regards,
Vitaliy Filippov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
