Re: Ceph replication factor of 2

On Thu, 24 May 2018 at 00:20, Jack <ceph@xxxxxxxxxxxxxx> wrote:
Hi,

I have to say, this is a common yet worthless argument.
If I have 3000 OSDs, using 2 or 3 replicas will not change much: the
probability of losing 2 devices is still "high".
On the other hand, if I have a small cluster, less than a hundred OSDs,
that same probability becomes "low".

What matters is losing the 2 or 3 OSDs that any particular PG is on, not whether there are 1000 other OSDs in the next rack.
Losing data is rather binary; it's not on a scale from 0.0 to 1.0. Either a piece of data is lost because its storage units are not there,
or it's not. Murphy's law will make it so that this lost piece of data is rather important to you. And Murphy will of course pick the
2-3 OSDs that are the worst case for you.
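
To put a rough number on that, here is a back-of-the-envelope Python sketch. The PG count,
the per-OSD failure probability and the assumption of independent failures and placements
are all made up for illustration; this is not how Ceph computes anything, it just shows why
the replica count matters far more than the total OSD count:

    # Crude model: each OSD independently dies within some window with
    # probability p_osd_fail; a PG is lost if every replica in its acting
    # set dies. Ignores recovery, correlated failures and CRUSH overlap.
    def p_any_pg_lost(pg_count: int, size: int, p_osd_fail: float) -> float:
        p_one_pg_lost = p_osd_fail ** size
        return 1.0 - (1.0 - p_one_pg_lost) ** pg_count

    # Hypothetical numbers: 4096 PGs, 1% chance a given OSD dies before
    # its data has been re-replicated somewhere else.
    for size in (2, 3):
        print(size, p_any_pg_lost(pg_count=4096, size=size, p_osd_fail=0.01))
    # size=2 -> ~0.34, size=3 -> ~0.004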
 

I do not buy the "if someone is doing maintenance and a device fails"
argument either: this is a no-limit goal. What if X servers burn at the same
time? What if an admin makes a mistake and drops 5 OSDs? What if some
network ToR switches or routers blow up?
Should we do one replica per OSD?


From my viewpoint, maintenance must happen, and unplanned maintenance will happen even if I wish it not to.
So 2-vs-3 is about what situation you end up in when one replica is under (planned or not) maintenance.
Is it an "any surprise makes me lose data now" mode, or a "many surprises need to occur" mode?
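
A trivial way to put the same thing in code, assuming a plain replicated pool and nothing more
(the size values are just the 2 and 3 under discussion; this is illustration, not a Ceph API):

    # How many additional, unplanned OSD losses a PG can absorb before its
    # last copy is gone, once some replicas are already offline.
    def surprises_left(size: int, replicas_down: int) -> int:
        return max(size - replicas_down - 1, 0)

    for size in (2, 3):
        print(f"size={size}, one replica in maintenance -> "
              f"{surprises_left(size, 1)} surprise(s) before data loss")
    # size=2 -> 0: any surprise during maintenance hits the only remaining copy.
    # size=3 -> 1: you still survive one unplanned failure while a host is down.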
 

I would like people, especially the Ceph devs and other people who
know how it works deeply (read the code!), to give us their advice

How about listening to people who have lost data during 20+ year careers in storage?
They will know a lot more about how the very improbable or "impossible" still happened to them
at the most unfortunate moment, regardless of what the code readers say.

This is all about weighing risks. If the risk for you is "ok, then I have to redownload that lost Ubuntu ISO again", it's fine
to keep data in only one place.

If the company goes out of business, or at least faces a two-day total stop while some sleep-deprived admin tries
bare-metal restores for the first time in her life, then the price of the SATA disks needed for 3 replicas will be literally
nothing compared to that.

To me it sounds like you are chasing validation of an answer you already have while asking the questions,
so if you want to go with 2 replicas, then just do it. But you don't get to complain to ceph or ceph-users when you also figure out
that the Mean-Time-Between-Failures ratings on the stickers of the disks are bogus, and what you really needed was
"mean time between surprises", which is always less than MTBF.

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
