keyvaluestore backend metadata overhead

Hi, we've been experimenting with the keyvaluestore backend and have found that every object write (e.g. with `rados put`) issues a single transaction containing 9 additional KeyValueDB writes beyond those that constitute the object data.  Judging by the key names, these are all metadata of some sort, but they become a problem when the objects themselves are very small.  With the default strip size of 4 KiB, for objects of 36 KiB or less, half or more of all key-value store writes are metadata writes; for objects of 4 KiB or less, the metadata overhead reaches 90% or more.
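For concreteness, the percentages above come from a simple back-of-the-envelope calculation (a sketch only; the 9-writes-per-transaction figure is what we observed, and the 4 KiB strip size is the default we were running with):

```python
import math

METADATA_WRITES = 9   # extra KeyValueDB writes we observed per object write
STRIP_SIZE = 4096     # default strip size in bytes (our test setup)

def metadata_fraction(object_size_bytes):
    """Fraction of KV writes in one object-write transaction that are metadata."""
    # Object data is split into strips; one KV write per strip.
    data_writes = max(1, math.ceil(object_size_bytes / STRIP_SIZE))
    return METADATA_WRITES / (METADATA_WRITES + data_writes)

# 36 KiB object: 9 data strips + 9 metadata writes -> 50% metadata
print(metadata_fraction(36 * 1024))  # 0.5
# 4 KiB object: 1 data strip + 9 metadata writes -> 90% metadata
print(metadata_fraction(4 * 1024))   # 0.9
```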

Is there any way to reduce the number of metadata rows which must be written with each object?

(Alternatively, if there is a way to convince the OSD to issue multiple concurrent write transactions, that would also help.  But even with `keyvaluestore op threads` set as high as 64, and `rados bench` issuing 64 concurrent writes, we never see more than one active write transaction on the (multithread-capable) backend.  Is there some other option we're missing?)
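For reference, this is the relevant fragment of the ceph.conf we tested with (option spelling as in the version we're running; values shown are the ones from the experiment above):

```ini
[osd]
# Tried values up to 64; made no observable difference to
# write-transaction concurrency in our tests.
keyvaluestore op threads = 64
```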
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
