Re: NVMe and 2x Replica

> It seems all the big vendors feel 2x is safe with NVMe but
> I get the feeling this community feels otherwise

Definitely!

As someone who works for a big vendor (and I have since I worked at
Fusion-IO way back in the old days), IMO the more accurate phrasing
would be that "someone in technical marketing at the big vendors" was
convinced that 2x was safe enough to put in a white paper or sales
document.  They (we, I guess, since I'm one of these types of people)
are focused on performance and cost numbers, and as much as I hate to
admit it, that focus can sometimes get in the way of long-term
reliability settings.

This doesn't mean they are "wrong" -- these documents are primarily
meant to show the capabilities of their hardware, with a bill of
materials containing their part numbers.  It is expected that end
users will adjust a few things for a production environment.

The idea that NVMe is inherently safer than spinning rust is not
necessarily true -- and it's beside the point anyway.  You are just as
likely to run into a weird situation where an OSD or PG acts up or
disappears for entirely non-hardware reasons.
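
To make that concrete, here's a minimal sketch of the choice a size=2
pool forces on you (the pool name "rbd" is just an example):

    # With size=2 there are only two settings for min_size, and
    # neither is comfortable:
    ceph osd pool set rbd min_size 2   # one flapping OSD blocks I/O on its PGs
    ceph osd pool set rbd min_size 1   # I/O continues on a single copy, but a
                                       # second hiccup before recovery finishes
                                       # can mean unfound or lost objects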

Unless you can live with "nine fives" instead of "five nines" (say, a
caching type of application where you can re-generate the data), use a
size of at least 3 -- and if you can't afford that much raw capacity,
look at erasure coding schemes.
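
For reference, a minimal sketch of both options (pool and profile
names are illustrative, and the EC parameters are just one reasonable
choice, not a recommendation for any specific cluster):

    # Replicated pool at the recommended baseline -- tolerates one
    # failure while still serving I/O from two remaining copies:
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2

    # Erasure coding with k=4,m=2 gives comparable durability (any
    # two chunks can be lost) at 1.5x raw overhead instead of 3x:
    ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ec-4-2
    # RBD on an EC data pool also needs overwrites enabled (plus a
    # replicated pool for the image metadata):
    ceph osd pool set ecpool allow_ec_overwrites true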

All of this is IMO of course,
Mark



On Thu, Feb 4, 2021 at 4:38 AM Adam Boyhan <adamb@xxxxxxxxxx> wrote:
>
> I know there are already a few threads about 2x replication, but I wanted to start one dedicated to NVMe. There are some older threads, but nothing recent that addresses how the vendors are now pushing the idea of 2x.
>
> We are in the process of considering Ceph to replace our Nimble setup. We will have two completely separate clusters at two different sites, with rbd-mirror snapshot replication between them. The plan would be to run 2x replication on each cluster. 3x is still an option, but for obvious reasons 2x is enticing.
>
> Both clusters will match the Supermicro example in the white paper below.
>
> It seems all the big vendors feel 2x is safe with NVMe, but I get the feeling this community feels otherwise. I'm trying to wrap my head around where the disconnect is between the big players and the community. I could be missing something, but even the Supermicro contact we worked the config out with was in agreement with 2x on NVMe.
>
> Appreciate the input!
>
> https://www.supermicro.com/white_paper/white_paper_Ceph-Ultra.pdf
>
> https://www.redhat.com/cms/managed-files/st-micron-ceph-performance-reference-architecture-f17294-201904-en.pdf
>
> https://www.samsung.com/semiconductor/global.semi/file/resource/2020/05/redhat-ceph-whitepaper-0521.pdf
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


