Re: optane + 4x SSDs for VM disk images?

The problem with caching is that if the performance delta between the
two storage types isn't large enough, the cost of the caching
algorithms and the complexity of managing everything outweigh the
performance gains.

With Optanes vs. SSDs, the main thing to consider is how busy the
devices are in the worst case.  Optanes have incredibly low latency
and therefore are great at being fast with smaller workloads (as
measured by the effective queue depth in iostat) -- but the Optanes
I've used typically max out at a queue depth of 15 or so.  SSDs
aren't as fast at single, low-queue-depth workloads, but the ones I
typically use in my ivory tower work better when they are
multitasking a lot -- and depending on the type they can easily
outperform Optanes when there are many things going on at once (in my
case the preferred drive is the SN200/SN260, which is excellent at
high queue depth workloads).
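If you want to see where your own devices land, iostat's extended
output is the easiest place to look -- the average queue size column
(avgqu-sz on older sysstat, aqu-sz on newer) tells you how many
requests a device is juggling at once.  Just as a sketch (device
names are placeholders, and the fio line is only an illustrative
read-only random-read job -- adjust to your own setup):

    # watch per-device extended stats once a second
    iostat -x 1 nvme0n1 nvme1n1

    # optionally generate a high queue depth 4k random read load
    fio --name=qd-test --filename=/dev/nvme0n1 --direct=1 \
        --ioengine=libaio --rw=randread --bs=4k \
        --iodepth=32 --numjobs=4 --runtime=60 --time_based

If the Optane sits at a queue depth of a handful while the SSDs are
getting hammered (or vice versa), that tells you far more than any
spec-sheet comparison.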

So the short suggestion is: don't waste time with caching, and analyze
your actual workload a bit to be 100% sure where the bottleneck is.
Unless you require stupidly low latency and the fastest possible
performance for a single user, you will get more bang for your buck by
adding more SSDs.  If you're not using HDDs, Ceph is usually CPU bound
due to limitations in the threading model -- so keep that in mind too
(sounds like you already know this if you're doing 4 partitions per
Optane :) ).
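(For what it's worth, if you carved up the Optanes by hand: newer
ceph-volume releases can also split a device into multiple OSDs for
you.  The line below is just a sketch -- check the docs for your
release before pointing it at real data.)

    # one way to put 4 OSDs on a single NVMe device
    ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1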

Mark
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


