Re: tgt and krbd

On Thursday, March 5, 2015, Nick Fisk <nick@xxxxxxxxxx> wrote:

Hi All,

 

Just a heads up after a day’s experimentation.

 

I believe tgt with its default settings has a small write cache when exporting a kernel-mapped RBD. In some write tests I saw four times the write throughput with tgt aio + krbd compared to tgt with the built-in librbd.

 

After running the following command against the LUN, which apparently disables the write cache, performance dropped back to what I am seeing with tgt+librbd, and is also the same as what I see with fio.

 

tgtadm --op update --mode logicalunit --tid 2 --lun 3 -P mode_page=8:0:18:0x10:0:0xff:0xff:0:0:0xff:0xff:0xff:0xff:0x80:0x14:0:0:0:0:0:0
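
For reference, a rough breakdown of that mode_page string, going by the standard layout of the SCSI Caching mode page (worth double-checking against the spec before relying on it):

# mode_page format is page:subpage:length:data-bytes...
#   8    -> page 0x08, the Caching mode page
#   0    -> subpage 0
#   18   -> 18 data bytes follow
#   0x10 -> first data byte: DISC (0x10) set, WCE (0x04) clear,
#           i.e. the LUN reports "no write cache" to the initiator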

 

From that I can only deduce that using tgt + krbd in its default state is not 100% safe to use, especially in an HA environment.

 

Nick




Hey Nick,

tgt actually does not have any caches, no read, no write. tgt's design is to pass all commands through to the backend as efficiently as possible.

http://lists.wpkg.org/pipermail/stgt/2013-May/005788.html

The configuration parameters just inform the initiators whether the backend storage has a cache. Clearly this makes a big difference for you.  What initiator are you using with this test?
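
One way to see what a LUN is actually advertising is to query the caching mode page from the initiator side, for example (the device name below is just a placeholder):

sg_modes --page=0x08 /dev/sdX           # dump the Caching mode page
sdparm --get=WCE /dev/sdX               # or just read the WCE bit
cat /sys/class/scsi_disk/*/cache_type   # what the Linux sd driver decided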

Maybe the kernel is doing the caching.  What tuning parameters do you have on the krbd disk?
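
For instance, the obvious queue settings on the mapped device can be read straight out of sysfs (rbd0 below is just a placeholder for whichever device is mapped):

grep . /sys/block/rbd0/queue/read_ahead_kb \
       /sys/block/rbd0/queue/nr_requests \
       /sys/block/rbd0/queue/scheduler \
       /sys/block/rbd0/queue/max_sectors_kb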

It could be that using aio is simply much more efficient. Maybe the built-in librbd backend isn't doing aio?
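
For comparison, this is roughly how the two LUN types would be created with tgtadm (the target/LUN numbers and image spec here are made up, and the rbd image syntax can vary with the tgt build):

# kernel-mapped RBD exported through tgt's aio backend
tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 3 \
       --bstype aio --backing-store /dev/rbd0

# the same image exported through tgt's built-in librbd backend
tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 3 \
       --bstype rbd --backing-store rbd/myimage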

Jake
 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
