Re: ceph luminous bluestore poor random write performances

Why do you think that is slow? That's 4.5k write IOPS and 13.5k read IOPS at the same time, which is amazing for a total of 30 HDDs.

It's actually way faster than you'd expect for 30 HDDs, so these DB devices are really helping there :)
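
To put rough numbers on it (assuming the usual 3x replicated pool,
which the thread doesn't state): 4.5k client writes become ~13.5k
backend writes, plus the 13.5k reads that's ~27k backend IOPS across
30 spindles, i.e. ~900 IOPS per HDD. A 7200 rpm disk manages maybe
150-200 random IOPS on its own, so the flash DB/WAL devices are
clearly absorbing a large share of that load.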


Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Thu, Jan 2, 2020 at 12:14 PM Ignazio Cassano <ignaziocassano@xxxxxxxxx> wrote:
Hi Stefan, using fio with bs=64k I got very good performance.
I am not skilled in storage, but the Linux filesystem block size is 4k.
So, how can I modify the Ceph configuration to get the best performance with bs=4k?
Regards
Ignazio
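
(Presumably that was the same fio command quoted below, just with the
block size bumped, e.g.:

    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
        --name=test --filename=random_read_write.fio --bs=64k --iodepth=64 \
        --size=4G --readwrite=randrw --rwmixread=75

With 16x larger blocks the same queue depth moves 16x more data per
round trip, so throughput looks much better even though the
per-request latency and IOPS ceiling of the HDDs haven't changed.)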



On Thu, Jan 2, 2020 at 10:59 AM Stefan Kooman <stefan@xxxxxx> wrote:
Quoting Ignazio Cassano (ignaziocassano@xxxxxxxxx):
> Hello All,
> I installed ceph luminous with openstack, and using fio in a virtual
> machine I got slow random writes:
>
> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
>     --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 \
>     --size=4G --readwrite=randrw --rwmixread=75

Do you use virtio-scsi with a SCSI queue per virtual CPU core? How many
cores do you have? I suspect the queue depth is hampering throughput
here ... but is throughput really what matters for your use case anyway?
Low latency generally matters most.
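
For context: with libvirt/KVM under OpenStack, virtio-scsi is normally
enabled per image through Glance properties. A minimal sketch (the
image UUID is a placeholder; verify the property names against your
release's docs):

    # enable the virtio-scsi controller and attach disks via SCSI
    openstack image set \
      --property hw_scsi_model=virtio-scsi \
      --property hw_disk_bus=scsi \
      <image-uuid>

You can then check how many request queues the guest actually got by
looking at the <controller type='scsi'> element in the domain XML
(virsh dumpxml <instance>).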

Gr. Stefan


--
| BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
