Re: Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas - Hammer release


 



On 9-1-2017 18:46, Oliver Humpage wrote:
> 
>> Why would you still be using journals when running OSDs fully on
>> SSDs?
> 
> In our case, we use cheaper large SSDs for the data (Samsung 850 Pro
> 2TB), whose performance is excellent in the cluster, but as has been
> pointed out in this thread can lose data if power is suddenly
> removed.
> 
> We therefore put journals onto SM863 SSDs (1 journal SSD per 3 OSD
> SSDs), which are enterprise quality and have power outage protection.
> This seems to balance speed, capacity, reliability and budget fairly
> well.

This would make me feel very uncomfortable.....

So you have a reliable journal, so up to that point things do work:
  once the data is in the journal, it is safe.

But then the data is asynchronously transferred to the data disk, and
that disk is an SSD that lies to you: it reports that the data is
written, but if you pull the power it turns out the data was never
really stored.
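The failure mode above can be sketched in a few lines. This is a hedged illustration, not Ceph code: the file path is invented, and the comments describe the durability assumption that software makes after fsync() returns.

```python
# Minimal sketch of the durability assumption under discussion: once
# os.fsync() returns, software (including Ceph's OSD write path) treats
# the data as being on stable storage. A consumer SSD with a volatile
# write cache and no power-loss protection can acknowledge the flush
# while the data still sits in DRAM -- the "lying" described above.
# The file path below is illustrative only.
import os

path = "/tmp/durability-demo.bin"
data = b"journal entry: object write\n"

fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
try:
    os.write(fd, data)
    os.fsync(fd)  # the kernel issues a cache-flush command to the drive...
    # ...but whether the drive persists the data before acknowledging is
    # up to its firmware: power-loss-protected SSDs (e.g. the SM863) do,
    # many consumer drives may not.
finally:
    os.close(fd)

print("fsync returned; the OS now believes the data is durable")
```

The point is that the operating system cannot tell the difference; only drives with power-loss protection make the fsync guarantee hold through a power cut.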

And then the only way to get the data consistent again is to (deep-)scrub.
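Conceptually, a deep scrub reads every replica of an object and compares content digests; a replica that lost its last write on power-off shows up as a mismatch. The sketch below is illustrative only: the function name, the majority-vote repair heuristic, and the data are all invented, and Ceph's real scrub and repair logic is considerably more involved.

```python
# Hedged sketch of what a deep scrub conceptually does: hash every
# replica's contents and flag the replicas that disagree with the
# majority. Not Ceph's actual implementation.
import hashlib

def deep_scrub(replicas: dict) -> list:
    """Return the OSD names whose replica digest disagrees with the majority."""
    digests = {osd: hashlib.sha256(data).hexdigest()
               for osd, data in replicas.items()}
    # Majority digest wins; real repair logic also consults object
    # versions and authoritative logs.
    counts = {}
    for d in digests.values():
        counts[d] = counts.get(d, 0) + 1
    majority = max(counts, key=counts.get)
    return [osd for osd, d in digests.items() if d != majority]

# The replica on osd.2 lost its last write when power was pulled:
bad = deep_scrub({
    "osd.0": b"object-v2",
    "osd.1": b"object-v2",
    "osd.2": b"object-v1",
})
print(bad)  # -> ['osd.2']
```

This also shows why scrubbing is only damage control here: it finds the inconsistency after the fact, it does not prevent the lying drive from dropping acknowledged writes.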

Not a very appealing outlook?

--WjW


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


