SSDs for journals vs SSDs for a cache tier, which is better?

Hey,

Which is "better": using SSDs to store the OSD journals, or using them as a writeback cache tier, all other things being equal?

The use case is a 15-node OSD cluster with 6 HDDs and 1 SSD per node.
It's used as block storage for a typical 20-hypervisor OpenStack cloud (with a bunch of VMs running Linux), with a 10 GigE public network and a 10 GigE replication network.

Let's consider both cases:
Journals on SSDs - for writes, the write operation returns as soon as the data lands on the journal SSD, before it is written to the backing HDD. So, for writes, the SSD journal approach should be comparable to an SSD cache tier: in both cases we're writing to an SSD (and to the replicas' SSDs) and returning to the client immediately after that, and the data is only flushed to the HDDs later on.
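
Concretely, the journal setup I have in mind is the usual one journal partition per OSD, carved out of the node's single shared SSD, roughly like this (the device names and journal size are just examples):

    # ceph.conf: reserve a 10 GB journal per OSD
    [osd]
    osd journal size = 10240

    # one HDD as the data device, one journal partition on the shared SSD per OSD
    ceph-disk prepare /dev/sdb /dev/sdg   # data on sdb, journal partition carved out of sdg
    ceph-disk prepare /dev/sdc /dev/sdg   # next OSD: data on sdc, another journal partition on sdg
    # ...and so on for the remaining HDDs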

However, for reads of hot data I would expect an SSD cache tier to be faster. That's because, with journals on SSDs, even if the data is still in the journal, it's always read from the (slow) backing disk anyway, right? Whereas with an SSD cache tier, hot data would be read from the (fast) SSD.

I'm sure both approaches have their own merits and might be better for some specific tasks, but with all other things being equal, I would expect that using the SSDs as a writeback cache tier should, on average, provide better performance than using the same SSDs for journals - specifically in terms of read throughput and latency.
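
For the cache tier case, what I have in mind is the standard writeback tier placed in front of the HDD-backed RBD pool, something like the following (the pool name "cache-pool" and the sizing values are just examples that would still need tuning):

    # attach an SSD-backed pool as a writeback cache in front of the rbd pool
    ceph osd tier add rbd cache-pool
    ceph osd tier cache-mode cache-pool writeback
    ceph osd tier set-overlay rbd cache-pool

    # basic hit-set and sizing settings so the tier flushes and evicts sensibly
    ceph osd pool set cache-pool hit_set_type bloom
    ceph osd pool set cache-pool target_max_bytes 200000000000
    ceph osd pool set cache-pool cache_target_dirty_ratio 0.4
    ceph osd pool set cache-pool cache_target_full_ratio 0.8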

The main difference, I suspect, is that with multiple HDDs (multiple ceph-osd processes) per node, all of those processes share the one SSD that holds their journals, whereas that's likely not the case with cache tiering, right? I must say I failed to find any detailed info on this, so any clarification would be appreciated.
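
To be explicit about what I mean: the way I picture the cache tier case, each SSD becomes its own OSD under a dedicated CRUSH root that only the cache pool uses, rather than being shared journal space for the node's six HDD OSDs. Roughly (the bucket, rule, and pool names and the OSD id are just examples):

    # put the SSD OSDs under their own CRUSH root and build the cache pool on it
    ceph osd crush add-bucket ssds root
    ceph osd crush add-bucket node01-ssd host
    ceph osd crush move node01-ssd root=ssds
    ceph osd crush set osd.90 1.0 root=ssds host=node01-ssd   # repeat for each SSD OSD
    ceph osd crush rule create-simple ssd-rule ssds host
    ceph osd pool create cache-pool 512 512 replicated ssd-rule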

So, is the above correct, or am I missing some pieces here? Any other major differences between the two approaches?

Thanks.
P.
