Re: ceph-osd performance on ram disk

On 11/09/2020 17:44, Mark Nelson wrote:

On 9/11/20 4:15 AM, George Shuklin wrote:
On 10/09/2020 19:37, Mark Nelson wrote:
On 9/10/20 11:03 AM, George Shuklin wrote:

...
Are there any knobs to tweak to see higher performance from ceph-osd? I'm pretty sure it's not any kind of leveling, GC, or other 'IOPS-related' issue (brd itself performs two orders of magnitude better).


...

I've disabled C-states (governor=performance); it made no difference - same IOPS, same CPU usage by ceph-osd. I just can't force Ceph to consume more than 330% of a CPU. I can push reads up to 150k IOPS (both network and local), hitting the CPU limit, but writes are somewhat restricted by Ceph itself.
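For reference, pinning the governor, disabling deep idle states and watching the per-process CPU ceiling look roughly like this (a sketch on my side; cpupower and pidstat from sysstat are assumed to be installed, and the pgrep lookup is just an example):

    # pin the scaling governor and disable idle states with non-zero exit latency
    cpupower frequency-set -g performance
    cpupower idle-set -D 0
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # should read "performance"

    # confirm the ~330% ceiling: per-process CPU for one ceph-osd, sampled every second
    pidstat -u -p $(pgrep -o ceph-osd) 1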


Ok, can I assume block/db/wal are all on the ramdisk? I'd start a benchmark and attach gdbpmp to the OSD to see if you can get a callgraph (1000 samples is nice if you don't mind waiting a bit). That will tell us a lot more about where the code is spending its time. It will slow the benchmark way down, FWIW. Some other things you could try: tweak the number of OSD worker threads to better match the number of cores in your system - too many and you end up with context switching, too few and you limit parallelism. You can also check the RocksDB compaction stats in the OSD logs using this tool:


https://github.com/ceph/cbt/blob/master/tools/ceph_rocksdb_log_parser.py


Given that you are on a ramdisk, the 1GB default WAL limit should be plenty to let you avoid WAL throttling during compaction, but verifying that compactions are not taking a long time is good peace of mind.
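In concrete terms, the steps above might look something like this (a sketch only - the gdbpmp repo location, PID lookup, shard/thread values, log path and parser invocation are my assumptions, not prescriptions):

    # 1. sample a callgraph of the running OSD while the benchmark runs
    git clone https://github.com/markhpc/gdbpmp
    ./gdbpmp/gdbpmp.py -p $(pgrep -o ceph-osd) -n 1000 -o osd.gdbpmp
    ./gdbpmp/gdbpmp.py -i osd.gdbpmp -t 1        # print the collected callgraph

    # 2. match worker threads to cores (restart the OSD afterwards);
    #    16 shards x 2 threads is only an example for a 32-core box
    ceph config set osd osd_op_num_shards 16
    ceph config set osd osd_op_num_threads_per_shard 2

    # 3. pull compaction timings out of the OSD log
    python3 ceph_rocksdb_log_parser.py /var/log/ceph/ceph-osd.0.log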


Thank you very much for the feedback. In my case all OSD data was on the brd device. (To test it, just create a ramdisk: modprobe brd rd_size=20G, create a PV and VG for Ceph, and let ceph-ansible consume them as OSD devices.)
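For anyone wanting to reproduce this, the setup looks roughly like the following (a sketch; the VG/LV names are just examples, the ceph-ansible variable should be checked against the playbook docs, and note that brd's rd_size is given in KiB, so ~20 GiB is 20971520):

    # create a 20 GiB ramdisk and put an LVM volume on it for the OSD
    modprobe brd rd_nr=1 rd_size=20971520
    pvcreate /dev/ram0
    vgcreate ceph-ram /dev/ram0
    lvcreate -l 100%FREE -n osd0 ceph-ram
    # then point ceph-ansible (e.g. lvm_volumes) at ceph-ram/osd0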

The material you've given me here is really cool, but a bit beyond my skill level right now. I've added it to my task list, and I'll continue to research this topic further.

Thank you for the pointers on what to look into.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



