Re: NVMe and 2x Replica

I have just one more suggestion for you:

> but even our Supermicro contact that we worked the
> config out with was in agreement with 2x on NVMe

These kinds of settings aren't set in stone; changing a pool's
replica count is a one-line command and the cluster rebalances itself
(admittedly you wouldn't want to just do this casually).
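
For reference, a minimal example (<pool> is a placeholder for your
pool name):

    ceph osd pool set <pool> size 2

Ceph then creates or drops replicas in the background, which is the
part you'd want to schedule for a quiet window.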

I don't know your situation in any detail, but perhaps you could start
with size 3 and put off the size 2 decision until your cluster is
maybe 30% full... then you could make a final decision to either add
more storage or rebalance to size 2.
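
To keep an eye on that threshold, "ceph df" reports raw cluster
usage and per-pool consumption:

    ceph df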

You can also have different size settings for different pools
depending on how important the data is.
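
For instance (the pool names here are purely hypothetical), you
could keep critical data at 3x while a scratch pool runs at 2x:

    ceph osd pool set vm-images size 3
    ceph osd pool set scratch size 2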

Mark


On Thu, Feb 4, 2021 at 4:38 AM Adam Boyhan <adamb@xxxxxxxxxx> wrote:
>
> I know there are already a few threads about 2x replication, but I wanted to start one dedicated to discussion on NVMe. There are some older threads, but nothing recent that addresses how the vendors are now pushing the idea of 2x.
>
> We are in the process of considering Ceph to replace our Nimble setup. We will have two completely separate clusters at two different sites, kept in sync with rbd-mirror snapshot replication. The plan would be to run 2x replication on each cluster. 3x is still an option, but for obvious reasons 2x is enticing.
>
> Both clusters will match the Supermicro example in the white paper below.
>
> It seems all the big vendors feel 2x is safe with NVMe, but I get the feeling this community feels otherwise. I'm trying to wrap my head around where the disconnect is between the big players and the community. I could be missing something, but even the Supermicro contact we worked the config out with was in agreement with 2x on NVMe.
>
> Appreciate the input!
>
> https://www.supermicro.com/white_paper/white_paper_Ceph-Ultra.pdf
>
> https://www.redhat.com/cms/managed-files/st-micron-ceph-performance-reference-architecture-f17294-201904-en.pdf
>
> https://www.samsung.com/semiconductor/global.semi/file/resource/2020/05/redhat-ceph-whitepaper-0521.pdf
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


