Ceph SSD CPU Frequency Benchmarks

Hi All,

I know there have been lots of discussions about needing fast CPUs to get
the most out of SSDs. However, I have never really seen any solid
numbers to compare how much difference a faster CPU makes,
or whether Ceph scales linearly with clock speed. So I did a little experiment
today.

I set up a single-OSD Ceph instance on a desktop PC. The desktop has an i5
Sandy Bridge CPU with the turbo overclocked to 4.3GHz. By using the
userspace governor in Linux, I was able to set static clock speeds to see
the effect on Ceph performance. My PC only has an old X25-M G2 SSD,
so I had to limit the IO testing to 4KB at QD=1, as otherwise the SSD ran out
of puff once I got to the higher clock speeds.
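
For anyone wanting to repeat this, pinning the clock speed with the userspace
governor looks roughly like the sketch below. It assumes the acpi-cpufreq
sysfs interface and root privileges (frequencies are in kHz); cpupower
frequency-set would do the same job:

#!/usr/bin/env python3
# Pin every core to a fixed frequency via the userspace governor.
# Sketch only: assumes the acpi-cpufreq sysfs interface and root privileges;
# frequencies are given in kHz and must be within the CPU's supported range.
import glob
import sys

def set_static_freq(khz):
    for cpu in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq"):
        with open(f"{cpu}/scaling_governor", "w") as f:
            f.write("userspace")
        with open(f"{cpu}/scaling_setspeed", "w") as f:
            f.write(str(khz))

if __name__ == "__main__":
    # e.g. "python3 setfreq.py 2400000" pins all cores to 2.4GHz
    set_static_freq(int(sys.argv[1]))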

CPU MHz   4KB Write IOPS   Min Latency (us)   Avg Latency (us)   CPU usr   CPU sys
1600      797              886                1250               10.14     2.35
2000      815              746                1222               8.45      1.82
2400      1161             630                857                9.5       1.6
2800      1227             549                812                8.74      1.24
3300      1320             482                755                7.87      1.08
4300      1548             437                644                7.72      0.9

The figures show a fairly linear trend right through the clock range and
clearly demonstrate the importance of having fast CPUs (GHz, not cores) if you
want to achieve high IO, especially at low queue depths.
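
As a quick sanity check on the "fairly linear" claim, here is a least-squares
fit over the table above (plain Python, numbers copied straight from the table):

# Quick least-squares fit of 4KB QD=1 write IOPS against clock speed,
# using the measurements from the table above.
import math

mhz  = [1600, 2000, 2400, 2800, 3300, 4300]
iops = [ 797,  815, 1161, 1227, 1320, 1548]

n = len(mhz)
mx, my = sum(mhz) / n, sum(iops) / n
sxx = sum((x - mx) ** 2 for x in mhz)
sxy = sum((x - mx) * (y - my) for x, y in zip(mhz, iops))
syy = sum((y - my) ** 2 for y in iops)

slope = sxy / sxx                      # extra IOPS per extra MHz
intercept = my - slope * mx
r = sxy / math.sqrt(sxx * syy)         # correlation: 1.0 = perfectly linear

print(f"slope = {slope:.3f} IOPS/MHz, intercept = {intercept:.0f}, r = {r:.3f}")

As a cross-check, at QD=1 the IOPS column is essentially 1,000,000 divided by
the average latency in microseconds (e.g. 1,000,000 / 644us is roughly 1550),
so the latency and IOPS figures are telling the same story.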


Things to note:
- These figures are from a desktop CPU; no doubt Xeons will be slightly faster
at the same clock speed.
- I'm assuming that using the userspace governor in this way is a realistic way
to simulate different CPU clock speeds?
- My old SSD is probably skewing the figures slightly.
- I have complete control over the turbo settings and plenty of cooling; many
server CPUs will limit the max turbo if multiple cores are under load or get
too hot.
- Ceph SSD OSD nodes are probably best served by high-end E3 CPUs, as they have
the highest clock speeds.
- HDDs with journals will probably benefit slightly from higher clock speeds,
if the disk isn't the bottleneck (i.e. small-block sequential writes).
- These numbers are for replica=1; at 2 or 3 replicas I would imagine they will
be at least halved (a sketch of reproducing the single-replica setup follows
this list).
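
For completeness, recreating the single-replica pool and a 4KB QD=1 write load
could look something like the sketch below. The pool name, PG count and runtime
are arbitrary, and rados bench is just one way of driving the load:

#!/usr/bin/env python3
# Sketch of recreating the test conditions: a size=1 (single replica) pool
# and a 4KB, queue-depth-1 write workload driven by rados bench.
# Pool name, PG count and runtime are arbitrary choices.
import subprocess

POOL = "cpu-test"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["ceph", "osd", "pool", "create", POOL, "64", "64"])
# newer Ceph releases may refuse size=1 without --yes-i-really-mean-it
run(["ceph", "osd", "pool", "set", POOL, "size", "1"])
# 60 second write test, 4096 byte objects, 1 concurrent op (QD=1)
run(["rados", "bench", "-p", POOL, "60", "write",
     "-b", "4096", "-t", "1", "--no-cleanup"])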


I hope someone finds this useful

Nick




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


