RE: Turning on rbd cache safely


 



I test performance from inside the VM using fio and a 64G test file
located on the same volume as the VM's rootfs.

fio 2.0.8 from the Debian Wheezy repos was run with this command line:

#fio --filename=/test/file --direct=1 --sync=0 --rw=write --bs=4k --runtime=60 \
--ioengine=libaio --iodepth=32 --time_based --size=64G --group_reporting \
--name=seqwr-test

At first I ran the test with "cache=writeback" in the VM disk definition and got ~19k IOPS
with 95% latency <750us - rather good results. But it is really strange that starting
the same VM with "cache=none" gives better results - ~27k IOPS!

Running fio with the --sync=1 option gives me ~90 IOPS with 95% latency <500 msec.
For me that is also too slow - 3 writes to journals (Intel S3700) + 6 network round trips
(10Gbit) obviously add latency, but, as I see it, not more than ten milliseconds.
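
A rough back-of-envelope under assumed per-hop costs (the figures below are my
assumptions, not measurements):

  6 net round trips x ~0.1 ms (10Gbit)  = ~0.6 ms
  3 journal writes  x ~0.3 ms (S3700)   = ~0.9 ms
  expected per-write latency             ~1.5-2 ms

which would still allow several hundred sync IOPS even at queue depth 1.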

I don't understand these results. :/

Megov Igor
CIO Yuterra


________________________________________
From: Alexandre DERUMIER <aderumier@xxxxxxxxx>
Sent: May 5, 2015 14:28
To: Межов Игорь Александрович
Cc: ceph-users
Subject: Re: Turning on rbd cache safely

Hi,

rbd_cache is a client-side config option only,

so there is no need to restart the OSDs.

if you set cache=writeback in libvirt, it will enable the cache,
so you don't need to set rbd_cache=true in ceph.conf
(the libvirt setting should override it).
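
For reference, a minimal sketch of such a libvirt disk definition (pool/image and
monitor names are placeholders; add the cephx auth section if your cluster needs it):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='rbd/vm-disk-1'>
    <host name='mon1' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>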


you can verify it is enabled by doing a sequential write benchmark with 4k blocks;
you should see a lot more bandwidth with cache=writeback.
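
for example, from inside the guest (file name and runtime are placeholders; run it
once with cache=none and once with cache=writeback and compare the reported bandwidth):

fio --filename=/test/file --direct=1 --rw=write --bs=4k --iodepth=32 \
--ioengine=libaio --runtime=30 --time_based --group_reporting --name=cache-check

the writeback run should show noticeably higher 4k sequential throughput, because the
rbd cache merges the small writes before they hit the OSDs.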

----- Original Message -----
From: "Межов Игорь Александрович" <megov@xxxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Tuesday, May 5, 2015 13:09:48
Subject: Turning on rbd cache safely

Hi!

After examining our running OSD configuration through the admin socket,
we suddenly noticed that the "rbd_cache" parameter is set to "false". Until
that moment I supposed that the rbd cache is an entirely client-side feature
and that it is enabled with the "cache=writeback" parameter in the libvirt VM
XML definition. Now I think that I was wrong.
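
For reference, the check I mean looks like this (osd.0 is just an example id):

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep rbd_cache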

I plan to do these steps (a sketch of the commands is below, after the list):

1. Add a group of rbd cache parameters to the master ceph.conf:
[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd cache size = <256-1024Mb>
rbd cache max dirty = <0.5-0.75*rbd_cache_size>

2. Push the config to all nodes:
ceph-deploy config push <nodeNN>

3. Set "noout" to cluster and restart all OSDs one-by-one, than change
back "noout".

4. Check the config of the running OSDs through the admin socket.
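
A rough sketch of steps 2-4 (node names and osd ids below are placeholders; assuming
the sysvinit ceph script on Debian Wheezy):

# push the updated ceph.conf to every node
ceph-deploy config push ceph01 ceph02 ceph03 ceph04 ceph05

# prevent rebalancing while the OSDs restart
ceph osd set noout

# restart each OSD in turn, waiting for active+clean in between
service ceph restart osd.0

# when all OSDs are back, re-enable out-marking
ceph osd unset noout

# check the running config through the admin socket
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep rbd_cache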

So, is that the right and safe way to turn on the rbd cache on a running production
cluster?

Do I need to specify the same rbd cache parameters on the client hosts
running VMs with the librbd backend?

PS: We use Firefly 0.80.7 on Debian Wheezy x86_64, 5 nodes, 58 OSDs,
journals on SSD.

Thanks for your answers!



Megov Igor
CIO Yuterra
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





