Re: Erasure Coding failure domain (again)

On Tue, 2 Apr 2019 19:04:28 +0900 Hector Martin wrote:

> On 02/04/2019 18.27, Christian Balzer wrote:
> > I took a quick peek at my test cluster (20 OSDs, 5 hosts) and a replica 2
> > pool with 1024 PGs.
> 
> (20 choose 2) is 190, so you're never going to have more than that many 
> unique sets of OSDs.
> 
And this is why one shouldn't send mails when in a rush, w/o fully grokking
the math one was just given.
Thanks for setting me straight. 
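
For the record, here's the quick check I should have done myself (a trivial
sketch; needs Python 3.8+ for math.comb):

>>> import math
>>> math.comb(20, 2)   # distinct OSD pairs a replica-2 pool can ever use
190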

> I just looked at the OSD distribution for a replica 3 pool across 48 
> OSDs with 4096 PGs that I have and the result is reasonable. There are 
> 3782 unique OSD tuples, out of (48 choose 3) = 17296 options. Since this 
> is a random process, due to the birthday paradox, some duplicates are 
> expected after only the order of 17296^0.5 = ~131 PGs; at 4096 PGs 
> having 3782 unique choices seems to pass the gut-feeling test. Too lazy
> to do the math in closed form, but here's a quick simulation:
> 
>  >>> import random
>  >>> len(set(random.randrange(17296) for i in range(4096)))
> 3671
> 
> So I'm actually slightly ahead.
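
The closed form isn't too bad, actually: drawing k PGs uniformly from N
possible placements, the expected number of distinct ones is
N * (1 - (1 - 1/N)**k). With your numbers (a sketch; it assumes CRUSH picks
uniformly at random, which it only approximates):

>>> N, k = 17296, 4096
>>> round(N * (1 - (1 - 1/N) ** k))   # expected distinct OSD triples
3647

So 3782 observed against ~3647 expected is indeed slightly ahead.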
> 
> At the numbers in my previous example (1500 OSDs, 50k pool PGs), 
> statistically you should get something like ~3 collisions on average, so 
> negligible.
> 
Sounds promising. 
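
Out of curiosity I also plugged your numbers into the same closed form (a
sketch; it ignores CRUSH constraints like host separation, which shrink the
real choice space):

>>> import math
>>> N = math.comb(1500, 3)   # possible replica-3 OSD triples
>>> k = 50_000               # pool PGs
>>> round(k - N * (1 - (1 - 1/N) ** k), 1)   # expected duplicate picks
2.2

A couple of collisions at that scale does round to negligible.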

> > Another thing to look at here is of course critical period and disk
> > failure probabilities; these guys explain the logic behind their
> > calculator, and I'd be delighted if you could have a peek and comment.
> > 
> > https://www.memset.com/support/resources/raid-calculator/  
> 
> I'll take a look tonight :)
> 
Thanks, a look at the Backblaze disk failure rates (picking the worst
ones) gives good insight into real-life probabilities, too.
https://www.backblaze.com/blog/hard-drive-stats-for-2018/
If we go with 2%/year, then at the scale of your example (1500 OSDs)
that's an average of one disk failure every 12 days.
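
Spelling that arithmetic out, together with the chance of a second failure
landing inside the recovery window (a rough sketch; the 1500-disk fleet is
from your example above, and the 24h recovery window is purely my assumption):

afr = 0.02                    # annualized failure rate per disk
disks = 1500                  # fleet size from the example above
failures_per_year = disks * afr           # ~30 failures/year
days_between = 365 / failures_per_year    # ~12.2 days between failures

window_h = 24                 # assumed time to re-replicate after a loss
p_disk_in_window = afr * window_h / (365 * 24)
p_overlap = 1 - (1 - p_disk_in_window) ** (disks - 1)
print(days_between)           # ~12.2
print(p_overlap)              # ~0.08, i.e. an ~8% chance of overlap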

Aside from the actual failure probabilities, another concern of course is
extended periods of the cluster being unhealthy; with certain versions
there was that "mon map will grow indefinitely" issue, and other, more
subtle ones might still lurk.

Christian
> -- 
> Hector Martin (hector@xxxxxxxxxxxxxx)
> Public Key: https://mrcn.st/pub
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Rakuten Communications