Re: ceph luminous bluestore poor random write performance

Effectively, performance is not that bad, but it drops a lot if you run the same test with 2-3 instances at the same time.
With iSCSI on an EMC Unity with SAS disks, performance is a little higher,
but it does not drop as much when you run the same test with 2-3 instances at the same time.
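A single-client way to approximate the 2-3 instance case is fio's numjobs option; the invocation below is a sketch along those lines, not the exact multi-instance test from the thread (without --filename, fio creates one data file per job):

  # Three concurrent jobs of the same 4k randrw workload, reported as one group
  fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
      --name=multitest --bs=4k --iodepth=64 --size=4G \
      --readwrite=randrw --rwmixread=75 --numjobs=3 --group_reporting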
Ignazio

On Thu, Jan 2, 2020, 11:19 Sinan Polat <sinan@xxxxxxxx> wrote:
Hi,

Your performance is not that bad, is it? What performance do you expect?

I just ran the same test.
12 Node, SATA SSD Only:
   READ: bw=63.8MiB/s (66.9MB/s), 63.8MiB/s-63.8MiB/s (66.9MB/s-66.9MB/s), io=3070MiB (3219MB), run=48097-48097msec
  WRITE: bw=21.3MiB/s (22.4MB/s), 21.3MiB/s-21.3MiB/s (22.4MB/s-22.4MB/s), io=1026MiB (1076MB), run=48097-48097msec

6 Node, SAS Only:
   READ: bw=22.1MiB/s (23.2MB/s), 22.1MiB/s-22.1MiB/s (23.2MB/s-23.2MB/s), io=3070MiB (3219MB), run=138650-138650msec
  WRITE: bw=7578KiB/s (7759kB/s), 7578KiB/s-7578KiB/s (7759kB/s-7759kB/s), io=1026MiB (1076MB), run=138650-138650msec
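
At bs=4k these bandwidths work out to roughly the following IOPS (simple division done here, not figures reported in the thread):

  # bytes per second divided by the 4 KiB block size
  echo $(( 66900000 / 4096 ))   # SATA SSD read:  ~16300 IOPS
  echo $(( 22400000 / 4096 ))   # SATA SSD write:  ~5500 IOPS
  echo $(( 23200000 / 4096 ))   # SAS read:        ~5700 IOPS
  echo $((  7759000 / 4096 ))   # SAS write:       ~1900 IOPS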

This is OpenStack Queens with Ceph FileStore (Luminous).
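
Note the thread subject says BlueStore while this cluster runs FileStore; which backend a given OSD uses can be confirmed with ceph osd metadata (OSD id 0 here is just an example):

  # Prints "osd_objectstore": "filestore" or "bluestore" for the given OSD
  ceph osd metadata 0 | grep osd_objectstore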

Kind regards,
Sinan Polat

> On 2 January 2020 at 10:59, Stefan Kooman <stefan@xxxxxx> wrote:
>
>
> Quoting Ignazio Cassano (ignaziocassano@xxxxxxxxx):
> > Hello All,
> > I installed Ceph Luminous with OpenStack, and using fio in a virtual machine
> > I got slow random writes:
> >
> > fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
> >     --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 \
> >     --size=4G --readwrite=randrw --rwmixread=75
>
> Do you use virtio-scsi with a SCSI queue per virtual CPU core? How many
> cores do you have? I suspect that the queue depth is hampering
> throughput here ... but is throughput performance really interesting
> anyway for your use case? Low latency generally matters most.
>
> Gr. Stefan
>
>
> --
> | BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
> | GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx
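
Regarding the virtio-scsi question above: with libvirt, one queue per vCPU is configured via <driver queues='N'/> on the virtio-scsi controller. A rough in-guest check, assuming a blk-mq kernel and that the volume shows up as /dev/sda (both assumptions, not details from the thread):

  nproc                   # number of vCPUs visible in the guest
  ls /sys/block/sda/mq/   # one subdirectory per hardware queue the device exposes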
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
