bluestore allocator performance testing

Hi All,

Yesterday I ran through some quick tests looking at BlueStore with the stupid allocator and the bitmap allocator, then compared the results with some existing Jewel FileStore benchmarks:

https://drive.google.com/file/d/0B2gTBZrkrnpZRWg2MTFhSk85b2M/view?usp=sharing

Both allocators showed similar read performance characteristics (not shown, as I wanted to focus on writes). For writes they were quite different: the stupid allocator was faster than the bitmap allocator for sequential writes, while the bitmap allocator was faster for most sizes of random writes.

BlueStore in general was faster than FileStore for large writes (due to the avoidance of journal writes), but was slower to varying degrees at small IO sizes. The bitmap allocator appears to produce very good random write behavior down to ~32K IO sizes, so small-IO performance may improve as better locking behavior and Allen's encode/decode proposal are implemented.
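For anyone less familiar with the two allocators, the rough structural difference can be illustrated with a toy sketch. This is simplified illustrative code, not Ceph's actual implementation: the class names, the 4K block size, and the data structures are all assumptions for the example. The general idea is that an extent/interval-based allocator (like the stupid allocator) can satisfy a request from a single contiguous free extent, while a plain bitmap allocator tracks free space one bit per block and may hand back many small extents:

```python
BLOCK = 4096  # assumed allocation unit for this toy example

class ExtentAllocator:
    """Toy interval-based allocator: free space is a list of
    (offset, length) extents, served contiguously when possible."""
    def __init__(self, size):
        self.free = [(0, size)]  # sorted list of free extents

    def allocate(self, want):
        for i, (off, length) in enumerate(self.free):
            if length >= want:
                if length == want:
                    self.free.pop(i)
                else:
                    self.free[i] = (off + want, length - want)
                return [(off, want)]  # a single contiguous extent
        return None

class BitmapAllocator:
    """Toy bitmap allocator: one free/used bit per fixed-size block;
    a request may be satisfied by many per-block extents."""
    def __init__(self, size):
        self.bits = [True] * (size // BLOCK)  # True = block is free

    def allocate(self, want):
        need = want // BLOCK
        got = []
        for i, is_free in enumerate(self.bits):
            if is_free:
                self.bits[i] = False
                got.append((i * BLOCK, BLOCK))
                need -= 1
                if need == 0:
                    return got
        return None
```

Allocating 64K from a fresh instance of each yields a single `(0, 65536)` extent from the extent allocator but sixteen 4K extents from the bitmap one. The per-block bookkeeping in the toy version is only a caricature (the real bitmap allocator is hierarchical and coalesces runs), but it hints at where extra work for large contiguous sequential allocations could come from.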

I think it would be worth spending a bit of time understanding why sequential write performance with the bitmap allocator is so much slower than with the stupid allocator.

Mark
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
