CephFS Single Threaded Performance

I have a small test cluster (just two nodes), and after rebuilding it several times I've found that my latest configuration, which SHOULD be the fastest, is by far the slowest (per thread).


I have around 10 spindles with an erasure-coded CephFS on them. When I installed several SSDs and recreated the filesystem with the metadata and write-cache pools on SSD, my performance plummeted from about 10-20MB/s to 2-3MB/s, but only per thread... I ran a rados benchmark, and the SSD metadata and write-cache pools can sustain anywhere from 50 to 150MB/s without issue.
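

For reference, the pool test was along these lines. This is a minimal sketch that sweeps the rados bench concurrency so you can see single-stream vs. aggregate numbers; the pool name is a placeholder, not necessarily what yours is called:

    # Sweep rados bench concurrency on one pool to compare aggregate
    # vs. single-stream write throughput.
    import subprocess

    POOL = "cephfs_cache"  # placeholder; substitute your SSD pool name

    for threads in (1, 4, 16):
        # 30-second write test with 4MB objects and `threads` in-flight
        # ops; rados bench removes its own objects when the run finishes
        subprocess.run(
            ["rados", "bench", "-p", POOL, "30", "write",
             "-b", "4194304", "-t", str(threads)],
            check=True,
        )

With -t 1 you get a rough single-stream figure to compare against what a single cp to the filesystem sees.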


And if I spin up multiple copies to the FS, each copy adds to that throughput without much of a hit. In fact, I can go up to about 8 copies (about 16MB/s aggregate) before they start slowing down at all. Even while I have several threads actively writing, I still benchmark around 25MB/s.
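

To reproduce that scaling without juggling cp by hand, here is a minimal sketch of the kind of parallel-writer test described above, assuming a placeholder mount point of /mnt/cephfs. Each worker streams a file sequentially and fsyncs at the end so the data actually hits Ceph:

    # Launch N parallel sequential writers against the CephFS mount and
    # report aggregate throughput, to show how per-thread writes scale.
    import os, time
    from concurrent.futures import ThreadPoolExecutor

    MOUNT = "/mnt/cephfs"      # placeholder mount point
    FILE_MB = 256              # size written by each worker
    CHUNK = b"\0" * (4 << 20)  # stream in 4MB writes

    def writer(i):
        path = os.path.join(MOUNT, f"bench_{i}.dat")
        with open(path, "wb") as f:
            for _ in range(FILE_MB // 4):
                f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())  # force the data out to the cluster
        os.unlink(path)

    for n in (1, 2, 4, 8):
        start = time.time()
        with ThreadPoolExecutor(max_workers=n) as ex:
            list(ex.map(writer, range(n)))
        secs = time.time() - start
        print(f"{n} writers: {n * FILE_MB / secs:.1f} MB/s aggregate")

One writer lands in the 2-3MB/s range for me, and the aggregate climbs nearly linearly as writers are added.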


Any ideas why single-threaded performance would take a hit like this? Almost everything is running on a single node (just a few OSDs on the other node), and I have plenty of RAM (96GB) and CPU (8 Xeon cores).


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
