Re: Analysing ceph performance with SSD journal, 10GbE NIC and 2 replicas - Hammer release

Sorry for the spam... I meant D_SYNC.

On Mon, Jan 9, 2017 at 2:56 PM, Brian Andrus <brian.andrus@xxxxxxxxxxxxx> wrote:
Hi Willem, the SSDs are probably fine for backing OSDs; it's the O_DSYNC writes they tend to lie about.

They may have a higher failure rate than enterprise-grade SSDs, but they are otherwise suitable for use as OSDs if the journals are placed elsewhere.
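
For anyone who wants to verify their own drives: the usual test is
single-threaded 4 KiB writes with the file opened O_DSYNC (fio with
--sync=1 does the same thing). Below is a minimal Python sketch of the
idea; the test path is made up, and a serious benchmark would also use
O_DIRECT:

    import os
    import time

    # Hypothetical file on the SSD under test.
    PATH = "/mnt/ssd-under-test/dsync-test.bin"
    BLOCK = b"\0" * 4096
    COUNT = 1000

    # O_DSYNC makes each write() block until the data is durable:
    # the same guarantee a Ceph journal depends on.
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
    try:
        start = time.monotonic()
        for _ in range(COUNT):
            os.write(fd, BLOCK)
        elapsed = time.monotonic() - start
    finally:
        os.close(fd)

    print("%.0f O_DSYNC 4k writes/s" % (COUNT / elapsed))

A drive with real power-loss protection can post high numbers here
honestly; a consumer drive that posts similarly high numbers is most
likely acknowledging writes from volatile cache.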

On Mon, Jan 9, 2017 at 2:39 PM, Willem Jan Withagen <wjw@xxxxxxxxxxx> wrote:
On 9-1-2017 18:46, Oliver Humpage wrote:
>
>> Why would you still be using journals when running OSDs fully on
>> SSDs?
>
> In our case, we use cheaper large SSDs for the data (Samsung 850 Pro
> 2TB), whose performance in the cluster is excellent but which, as has
> been pointed out in this thread, can lose data if power is suddenly
> removed.
>
> We therefore put journals onto SM863 SSDs (1 journal SSD per 3 OSD
> SSDs), which are enterprise quality and have power outage protection.
> This seems to balance speed, capacity, reliability and budget fairly
> well.

This would make me feel very uncomfortable...

So you have a reliable journal, and up to that point things do work:
  once the data is in the journal, it is safe.

But then the data is transferred asynchronously to the data disk, and
that is an SSD that lies to you: it will tell you the data is written,
but if you pull the power, it turns out the data is not really stored.

And then the only way to get the data consistent again is to (deep-)scrub; see the sketch below.

Not a very appealing outlook?

--WjW
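
For what it's worth, the deep scrub mentioned above catches this by
reading the object data back on every replica and comparing checksums.
A toy illustration of that idea (not Ceph's actual code; the replica
paths are made up):

    import hashlib

    # Hypothetical copies of the same object on two replica OSDs.
    REPLICAS = ["/var/lib/osd0/object", "/var/lib/osd1/object"]

    def digest(path):
        # A deep scrub reads back the full object data, not just
        # the metadata, so silently dropped writes are caught here.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    digests = {p: digest(p) for p in REPLICAS}
    if len(set(digests.values())) > 1:
        # This is the mismatch that surfaces as an inconsistent PG,
        # which then has to be repaired from a good copy.
        print("inconsistent:", digests)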


--
Brian Andrus
Cloud Systems Engineer
DreamHost, LLC
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
