Re: RBD performance - tuning hints / parameter doc

Hi Josh,

Thanks for the hint.
Can you please spend a few words on the meaning of these parameters?
- filestore min/max sync interval =
	int or float?  seconds?  of what?
- filestore flusher = false
- filestore queue max ops = 10000
	what is 'one op'?  a queue in front of what?
- filestore op threads =
	what are useful values here?

- journal dio = true/false
- osd op threads = 
- osd disk threads = 
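
My guesses so far, written out as a ceph.conf [osd] snippet (the sync
and queue values are the ones Alexandre posted, the thread counts are
what I believe are the defaults) - please correct me where the
comments are wrong:

[osd]
	# float, seconds: how often the filestore commits data to the
	# OSD data disk, after which the journal can be trimmed?
	filestore min sync interval = 29
	filestore max sync interval = 30
	# bool: disable the background flusher thread?
	filestore flusher = false
	# max number of operations queued in front of the filestore
	# (one op = one transaction, e.g. one write)?
	filestore queue max ops = 10000
	# threads applying transactions to the filesystem; default 2?
	filestore op threads = 2
	# bool: use O_DIRECT for the journal writes?
	journal dio = true
	# threads serving client ops on the OSD; default 2?
	osd op threads = 2
	# threads for background disk work like scrubbing; default 1?
	osd disk threads = 1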


Kind Regards,
-Dieter


On Wed, Aug 29, 2012 at 07:37:36PM +0200, Josh Durgin wrote:
> On 08/29/2012 01:50 AM, Alexandre DERUMIER wrote:
> > Nice results!
> > (Can you run the same benchmark from a qemu-kvm guest with the virtio driver?
> > I did some benchmarks a few months ago with Stephan Priebe, and we were never able to get more than 20000 IOPS with a full-SSD 3-node cluster.)
> >
> >>> How can I set the variables that control when the journal data has to go to the OSD? (after X seconds and/or when the journal is Y % full)
> > I think you can try to tune these values
> >
> > filestore max sync interval = 30
> > filestore min sync interval = 29
> > filestore flusher = false
> > filestore queue max ops = 10000
> 
> Increasing filestore_op_threads might help as well.
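> 
> For example (untested numbers - these are just the values Alexandre
> posted, plus a bumped thread count; the default is 2):
> 
> [osd]
>     filestore max sync interval = 30
>     filestore min sync interval = 29
>     filestore flusher = false
>     filestore queue max ops = 10000
>     filestore op threads = 4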
> 
> > ----- Original Message -----
> >
> > From: "Dieter Kasper" <d.kasper@xxxxxxxxxxxx>
> > To: ceph-devel@xxxxxxxxxxxxxxx
> > Cc: "Dieter Kasper (KD)" <d.kasper@xxxxxxxxxxxx>
> > Sent: Tuesday, 28 August 2012 19:48:42
> > Subject: RBD performance - tuning hints
> >
> > Hi,
> >
> > on my 4-node system (SSD + 10GbE, see bench-config.txt for details)
> > I can observe a pretty nice rados bench performance
> > (see bench-rados.txt for details):
> >
> > Bandwidth (MB/sec): 961.710
> > Max bandwidth (MB/sec): 1040
> > Min bandwidth (MB/sec): 772
> >
> >
> > Also the bandwidth performance generated with
> > fio --filename=/dev/rbd1 --direct=1 --rw=$io --bs=$bs --size=2G --iodepth=$threads --ioengine=libaio --runtime=60 --group_reporting --name=file1 --output=fio_${io}_${bs}_${threads}
> >
> > .... is acceptable, e.g.
> > fio_write_4m_16 795 MB/s
> > fio_randwrite_8m_128 717 MB/s
> > fio_randwrite_8m_16 714 MB/s
> > fio_randwrite_2m_32 692 MB/s
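> >
> > (A loop along these lines can drive the fio command above; the value
> > lists are only examples, not the exact set I used:)
> >
> > for io in read randread write randwrite; do
> >   for bs in 512 4k 8k 64k 2m 4m 8m; do
> >     for threads in 16 32 64 128; do
> >       fio --filename=/dev/rbd1 --direct=1 --rw=$io --bs=$bs --size=2G \
> >           --iodepth=$threads --ioengine=libaio --runtime=60 \
> >           --group_reporting --name=file1 --output=fio_${io}_${bs}_${threads}
> >     done
> >   done
> > done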
> >
> >
> > But the write IOPS seem to be limited to around 19k ...
> >
> > (IOPS)                   RBD 4M   64k (= optimal_io_size)
> > fio_randread_512_128      53286   55925
> > fio_randread_4k_128       51110   44382
> > fio_randread_8k_128       30854   29938
> > fio_randwrite_512_128     18888    2386
> > fio_randwrite_512_64      18844    2582
> > fio_randwrite_8k_64       17350    2445
> > (...)
> > fio_read_4k_128           10073   53151
> > fio_read_4k_64             9500   39757
> > fio_read_4k_32             9220   23650
> > (...)
> > fio_read_4k_16             9122   14322
> > fio_write_4k_128           2190   14306
> > fio_read_8k_32              706   13894
> > fio_write_4k_64            2197   12297
> > fio_write_8k_64            3563   11705
> > fio_write_8k_128           3444   11219
> >
> >
> > Any hints for tuning the IOPS (read and/or write) would be appreciated.
> >
> > How can I set the variables that control when the journal data has to go to the OSD? (after X seconds and/or when the journal is Y % full)
> >
> >
> > Kind Regards,
> > -Dieter
> >
> >
> >
> 


