Re: Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas - Hammer release

> Why would you still be using journals when running fully OSDs on SSDs?

In our case, we use cheaper large SSDs for the data (Samsung 850 Pro 2TB). Their performance in the cluster is excellent, but as has been pointed out in this thread, they can lose data if power is suddenly removed.

We therefore put journals onto SM863 SSDs (1 journal SSD per 3 OSD SSDs), which are enterprise quality and have power outage protection. This seems to balance speed, capacity, reliability and budget fairly well.
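For concreteness, the 1 journal SSD per 3 OSD SSDs layout above could be expressed in Hammer-era ceph.conf terms roughly as follows. This is only an illustrative sketch: the partition labels and the 10 GB journal size are assumptions for the example, not our actual settings.

```
# Illustrative ceph.conf fragment (Hammer era).
# Each data-SSD OSD journals to its own partition on the shared SM863;
# the by-partlabel paths below are placeholders for your own layout.
[osd]
osd journal size = 10240    ; 10 GB per journal partition (assumption; size to your write load)

[osd.0]
osd journal = /dev/disk/by-partlabel/journal-0    ; partition 1 on the SM863

[osd.1]
osd journal = /dev/disk/by-partlabel/journal-1    ; partition 2 on the SM863

[osd.2]
osd journal = /dev/disk/by-partlabel/journal-2    ; partition 3 on the SM863
```

In practice most deployments let ceph-disk create these journal partitions at OSD-preparation time rather than hand-editing paths, but the resulting mapping is the same.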

Perversely, for the HDD tier we *don’t* use separate journals, but instead use the controller’s capacitor-backed cache to buffer writes. You can’t actually tell the difference between the HDD and SSD tiers for writes (on a 10Gb network). The caveat is that you need a large cache and not too many HDDs per node, but as throwaway bulk storage tucked into a mostly-SSD cluster it works very well for us.

Oliver.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



