Re: Bluestore performance 50% of filestore

On 2017-11-14 21:54, Milanov, Radoslav Nikiforov wrote:

Hi

We have a 3-node, 27-OSD cluster running Luminous 12.2.1.

In the filestore configuration there are 3 SSDs used for the journals of 9 OSDs on each host (1 SSD holds 3 journal partitions for 3 OSDs).

I’ve converted filestore to bluestore by wiping one host at a time and waiting for recovery. The SSDs now contain the block-db, again with one SSD serving 3 OSDs.
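
For context, recreating one such OSD on Luminous could look roughly like the sketch below; this is my assumption of the procedure, not quoted from the original post, and /dev/sdX (data HDD) and /dev/sdY1 (block-db partition on the SSD) are placeholder device names:

# hypothetical example: one bluestore OSD with its block-db on an SSD partition
ceph-volume lvm create --bluestore --data /dev/sdX --block.db /dev/sdY1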

 

The cluster is used as storage for OpenStack.

Running fio in a VM on that OpenStack shows bluestore performing at roughly half the speed of filestore.

fio --name fio_test_file --direct=1 --rw=randwrite --bs=4k --size=1G --numjobs=2 --time_based --runtime=180 --group_reporting

fio --name fio_test_file --direct=1 --rw=randread --bs=4k --size=1G --numjobs=2 --time_based --runtime=180 --group_reporting

 

 

Filestore

  write: io=3511.9MB, bw=19978KB/s, iops=4994, runt=180001msec

  write: io=3525.6MB, bw=20057KB/s, iops=5014, runt=180001msec

  write: io=3554.1MB, bw=20222KB/s, iops=5055, runt=180016msec

 

  read : io=1995.7MB, bw=11353KB/s, iops=2838, runt=180001msec

  read : io=1824.5MB, bw=10379KB/s, iops=2594, runt=180001msec

  read : io=1966.5MB, bw=11187KB/s, iops=2796, runt=180001msec

 

Bluestore

  write: io=1621.2MB, bw=9222.3KB/s, iops=2305, runt=180002msec

  write: io=1576.3MB, bw=8965.6KB/s, iops=2241, runt=180029msec

  write: io=1531.9MB, bw=8714.3KB/s, iops=2178, runt=180001msec

 

  read : io=1279.4MB, bw=7276.5KB/s, iops=1819, runt=180006msec

  read : io=773824KB, bw=4298.9KB/s, iops=1074, runt=180010msec

  read : io=1018.5MB, bw=5793.7KB/s, iops=1448, runt=180001msec

 

 

- Rado

 


It would be useful to see how this filestore edge holds up when you increase the queue depth (threads/jobs), for example to 32 or 64, as in the sketch below. That would represent a more practical load.
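
As a sketch only, one of the original fio command lines adjusted for a deeper queue might look like this; --ioengine=libaio, --iodepth=32 and --numjobs=4 are my assumptions, not values from the runs above:

fio --name fio_test_file --direct=1 --rw=randwrite --bs=4k --size=1G --ioengine=libaio --iodepth=32 --numjobs=4 --time_based --runtime=180 --group_reporting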

I can see an extreme case where filestore may be faster: a cluster with a large number of OSDs and only one client thread. In that case, when the client I/O hits an OSD, that OSD will not be busy syncing its journal to the HDD (which it would be under normal load). But again, this is not a practical setup.

/Maged

 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
