Re: maximum rebuild speed for erasure coding pool

 > Fancy fast WAL/DB/Journals probably help a lot here, since they do
 > affect the "iops" you experience from your spin-drive OSDs.

What difference can be expected if you have a 100 IOPS HDD and you start 
putting WAL/DB/journals on SSD? Roughly what would that 100 IOPS 
increase to? 


-----Original Message-----
From: Janne Johansson [mailto:icepic.dz@xxxxxxxxx] 
Sent: donderdag 9 mei 2019 16:13
To: Feng Zhang
Cc: ceph-users
Subject: Re:  maximum rebuild speed for erasure coding pool

Den tors 9 maj 2019 kl 15:46 skrev Feng Zhang <prod.feng@xxxxxxxxx>:



	For an erasure pool, suppose I have 10 nodes, each with 10 6TB
	drives, so 100 drives in total. I make a 4+2 erasure pool; the
	failure domain is host/node. If one drive fails (assume the 6TB
	is fully used), what is the maximum speed the recovery process
	can reach? Also suppose the cluster network is 10GbE and each
	disk has a maximum of 200MB/s sequential throughput.



In my experience, IOPS makes the largest impact during recovery of 
spinning drives, not sequential performance. While recovering, you will 
see progress (on older clusters at least) reported as a number of 
misplaced/degraded objects, like:

misplaced    3454/34989583 objects (0.0123%)  

and the first number (here is my guess) moves at the speed of the IOPS 
of the drives being repaired. So if the drive(s) you rebuild from can do 
100 IOPS, the scenario above will take ~34 seconds, even though the 
object sizes and raw drive speeds would suggest something else about how 
fast 0.0123% of your stored data could move.
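As a sanity check, the ~34 seconds above is just the misplaced-object 
count divided by the drive's IOPS; the numbers come from this thread, 
and the snippet below is only a sketch of that arithmetic:

```python
# IOPS-bound recovery estimate: one recovery op per object, paced by the
# drive's random-IO capability (100 IOPS assumed above for a spinner).
misplaced_objects = 3454   # from the "misplaced ... objects" status line
drive_iops = 100           # assumed IOPS of the drive being rebuilt from

recovery_seconds = misplaced_objects / drive_iops
print(f"~{recovery_seconds:.0f} seconds")
```

The point is that object count, not data volume, dominates the estimate 
when the drives are IOPS-bound.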

As soon as you get super-fast SSDs and NVMe drives, the limit moves 
somewhere else, since they have crazy IOPS numbers and hence will repair 
much faster. But if you have only spinning drives, then "don't hold your 
breath" is good advice: it will take longer than 6 TB divided by 200MB/s 
(8h20m) if you are unlucky and other drives can't help out in the 
rebuild.
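For reference, the 8h20m figure is the pure throughput-bound lower 
bound, a full 6 TB drive streamed sequentially at 200 MB/s (decimal 
units assumed):

```python
# Throughput-bound lower bound for rebuilding one full drive.
capacity_bytes = 6 * 10**12      # 6 TB, decimal
throughput_bps = 200 * 10**6     # 200 MB/s sequential

seconds = capacity_bytes // throughput_bps
hours, rem = divmod(seconds, 3600)
print(f"{hours}h{rem // 60}m")   # the 8h20m mentioned above
```

Real recovery of an IOPS-bound spinner will be slower than this 
best-case number, not faster.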

Fancy fast WAL/DB/Journals probably help a lot here, since they do 
affect the "iops"
you experience from your spin-drive OSDs.

-- 

May the most significant bit of your life be positive.



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


