Re: SSD Primary Affinity

On 18/04/17 22:28, Anthony D'Atri wrote:
> I get digests, so please forgive me if this has been covered already.
> 
>> Assuming production level, we would keep a pretty close 1:2 SSD:HDD ratio,
> 
> 1:4-5 is common but depends on your needs and the devices in question, i.e. assuming LFF drives and that you aren’t using crummy journals.
> 
>> First of all, is this even a valid architecture decision? 
> 
> Inktank described it to me back in 2014/2015 so I don’t think it’s ultra outré.  It does sound like a lot of work to maintain, especially when components get replaced or added.
> 
>> it should boost performance levels considerably compared to spinning disks,
> 
> Performance in which sense?  I would expect it to boost read performance but not so much writes.
> 
> I haven’t used cache tiering so can’t comment on the relative merits.  Your local workload may be a factor.
> 
> — aad

As it happens I've got a ceph cluster with a 1:2 SSD:HDD ratio, and a while ago I did some fio testing against an SSD-primary pool to see how it performed, investigating it as an alternative to a cache tier. Generally the results were as aad predicts: read performance for the pool was considerably better, almost as good as a pure SSD pool, while write performance improved less dramatically, topping out at maybe 50% faster depending on the exact workload.
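
For context rather than a recipe, the primary-affinity route to an SSD-primary pool boils down to something like the sketch below (the OSD IDs are illustrative, and on pre-Luminous releases the mons also need "mon osd allow primary affinity = true" before the setting is accepted):

    # Mark the HDD OSDs as never-primary so that wherever a PG has an SSD
    # member, the SSD copy handles the reads. Substitute your own OSD IDs.
    for id in 0 1 2 3; do
        ceph osd primary-affinity osd.$id 0
    done
    # SSD OSDs keep the default affinity of 1 and so take the primary role.

The stricter alternative is an ssd-primary style CRUSH rule (a "take ssd ... emit" step for the first replica, then a "take hdd" step for the rest), which guarantees the primary actually lands on an SSD rather than just preferring one when it happens to be in the acting set.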

In the end I went with splitting the HDDs and SSDs into separate pools, and just using the SSD pool for the VMs/data blocks which needed to be snappier. For most of my users it didn't matter that the backing pool was relatively slow; only a few wanted to run I/O-intensive workloads where the speed was actually required, so putting that much of the data on the SSDs would have been something of a waste.
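
If anyone wants to reproduce the split, it amounts to roughly the following, assuming the SSD and HDD OSDs already sit under separate CRUSH roots; the rule, pool and PG-count choices here are only placeholders:

    # One simple replicated rule per CRUSH root, then a pool on each.
    ceph osd crush rule create-simple rule-ssd ssd host
    ceph osd crush rule create-simple rule-hdd hdd host
    ceph osd pool create rbd-ssd 128 128 replicated rule-ssd
    ceph osd pool create rbd-hdd 512 512 replicated rule-hdd

The upside over the mixed approach is that SSD capacity only gets spent on the data that actually benefits from it.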

-- 
Richard Hesketh

