Re: ceph nvme 2x replication

And btw, EC with k=2, m=2, min_size=3 is also probably fine, and has
only a 2x space cost.
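
For reference, a minimal sketch of what such a pool could look like
(the profile name, pool name, and PG count here are just placeholders):

    ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ec22
    ceph osd pool set ecpool min_size 3

With k=2, m=2 each object is split into 2 data chunks plus 2 coding
chunks, so usable capacity is half of raw, the same as 2x replication,
but with min_size=3 the pool keeps serving I/O with one chunk down and
can survive losing two chunks without losing data.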

-- dan

On Wed, Feb 19, 2020 at 5:46 PM Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
>
> x2 replication is perfectly fine as long as you also keep min_size at 2 ;)
>
> (But that means the pool blocks I/O as soon as any single OSD or host is offline)
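>
> For an existing replicated pool that would be something like the
> following (the pool name is a placeholder):
>
>     ceph osd pool set mypool size 2
>     ceph osd pool set mypool min_size 2
>
> With min_size=2 the pool stops accepting I/O when only one replica is
> up, so the two copies can never silently diverge.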
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Wed, Feb 19, 2020 at 4:41 PM Wido den Hollander <wido@xxxxxxxx> wrote:
> >
> >
> >
> > On 2/19/20 3:17 PM, Frank R wrote:
> > > Hi all,
> > >
> > > I have noticed that Red Hat is willing to support 2x replication with
> > > NVMe drives. Additionally, I have seen a CERN presentation where they
> > > use 2x replication with NVMe for a hyperconverged/HPC/CephFS
> > > solution.
> > >
> >
> > Don't do this if you care about your data. NVMe isn't anything better or
> > worse than other SSDs. It's still an SSD; only the SATA/SAS controller
> > has been swapped for NVMe, and it's still flash.
> >
> > > I would like to hear some opinions on whether this is really a good
> > > idea for production. Is this setup (NVMe/2x replication) really only
> > > meant to be used for data that is backed up and/or can be lost without
> > > causing a catastrophe?
> > >
> >
> > Yes.
> >
> > You can still lose data due to a single drive failure or OSD crash.
> > Let's say you have an OSD/host down for maintenance or due to a network
> > outage. The OSD's device isn't lost, but it's unavailable.
> >
> > While that OSD is unavailable you lose another OSD, but this time you
> > actually lose the device due to a hardware failure.
> >
> > Now you've lost data, even though you *think* you still have another
> > OSD in a healthy state. If you boot that OSD you'll find it's outdated,
> > because with min_size=1 writes kept going to the single remaining
> > replica, the OSD you just lost.
> >
> > Result = data loss
> >
> > 2x replication is a bad thing in production if you care about your data.
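> >
> > If you want to check what a given pool is currently set to, something
> > like the following works ('mypool' is a placeholder):
> >
> >     ceph osd pool get mypool size
> >     ceph osd pool get mypool min_size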
> >
> > Wido
> >
> > > Thanks,
> > > Frank
> > >
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



