Re: optimize bluestore for random write i/o

This workload is probably bottlenecked by rocksdb (small writes are
buffered there as deferred writes before being applied to the block
device), so that's what needs tuning here.
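For reference, bluestore's embedded rocksdb is configured through the
bluestore_rocksdb_options string in ceph.conf. A minimal sketch of the
kind of tuning meant here; the values are illustrative assumptions, not
tested recommendations, so benchmark before adopting any of them:

[osd]
# This string is passed to rocksdb when the OSD starts. Bigger and more
# memtables let rocksdb absorb more small random writes between flushes,
# at the cost of memory. The values below are hypothetical starting
# points, not measured tuning.
bluestore_rocksdb_options = compression=kNoCompression,max_write_buffer_number=8,min_write_buffer_number_to_merge=2,write_buffer_size=268435456,recycle_log_file_num=4

Putting the rocksdb DB/WAL on a separate, faster device is the other
common lever for this kind of workload.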


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Tue, Mar 5, 2019 at 9:29 AM Stefan Priebe - Profihost AG
<s.priebe@xxxxxxxxxxxx> wrote:
>
> Hello list,
>
> while the performance of sequential 4k writes on bluestore is very high
> and even higher than on filestore, I was wondering what I can do to
> optimize the random write pattern as well.
>
> While using:
> fio --rw=write --iodepth=32 --ioengine=libaio --bs=4k --numjobs=4
> --filename=/tmp/test --size=10G --runtime=60 --group_reporting
> --name=test --direct=1
>
> I get 36000 IOPS on bluestore versus 11500 on filestore.
>
> Using randwrite gives me 17000 IOPS on filestore but only 9500 on bluestore.
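>
> (i.e. the same command as above with --rw=randwrite):
>
> fio --rw=randwrite --iodepth=32 --ioengine=libaio --bs=4k --numjobs=4
> --filename=/tmp/test --size=10G --runtime=60 --group_reporting
> --name=test --direct=1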
>
> This is on an all-flash / SSD cluster running Luminous 12.2.10.
>
> Greets,
> Stefan