Re: mark out vs crush weight 0

Hello,

On Thu, 19 May 2016 13:26:33 +0200 Oliver Dzombic wrote:

> Hi,
> 
> a spare disk is a nice idea.
> 
> But I think that's something you can also do with a shell script.
> 

Definitely, but such a script is very likely to end up in conflict with
your MONs and what they want to do.

For example, you would have to query the running, active configuration of
your timeouts from the monitors to make sure you act before they do.

Doable, yes. Easy and 100% safe, not so much.
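To illustrate the race such a script has to avoid, here is a minimal Python sketch. The admin-socket query (`ceph daemon mon.<id> config get mon_osd_down_out_interval`) is a real Ceph command; the safety margin and the helper names are assumptions made up for this example:

```python
import json
import subprocess

# Margin (seconds) by which the script tries to beat the MONs.
# Name and value are assumptions for this sketch.
SAFETY_MARGIN = 30

def get_down_out_interval(mon_id="a"):
    """Query the *running* value of mon_osd_down_out_interval from a MON."""
    out = subprocess.check_output(
        ["ceph", "daemon", "mon." + mon_id,
         "config", "get", "mon_osd_down_out_interval"])
    return int(json.loads(out.decode())["mon_osd_down_out_interval"])

def can_still_act(down_since, now, out_interval, margin=SAFETY_MARGIN):
    """True while the script still has time to act before the MONs
    mark the OSD out themselves."""
    return (now - down_since) < (out_interval - margin)
```

With the usual mon_osd_down_out_interval default of 600 seconds, the script would have to react within roughly 570 seconds of the OSD going down, and it would need to re-query that value whenever the monitors' configuration might have changed.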

Christian

> Checking if an OSD is down or out and then swapping in your spare disk.
> 
> Maybe the programming resources should not be spent on something most
> of us can do with a simple shell script that checks the situation every
> 5 seconds.
> 
> ----
> 
> Maybe a better idea (in my humble opinion) is to solve this by
> optimizing the code for recovery situations.
> 
> Currently we have things like
> 
> client-op-priority,
> recovery-op-priority,
> max-backfills,
> recovery-max-active and so on
> 
> to limit the performance impact in a recovery situation.
> 
> And still, during recovery, performance goes downhill (a lot) when all
> OSDs start refilling the to-be-recovered OSD.
> 
> In my case, I was removing old HDDs from a cluster.
> 
> If I down/out them (6 TB drives, 40-50% full), the cluster's performance
> drops very dramatically. So I had to reduce the weight in 0.1 steps to
> ease this pain, but could not eliminate it completely.
> 
> 
> So I think the tools/code that protect the cluster's performance (even
> in a recovery situation) can be improved.
> 
> Of course, on one hand, we want to make sure that the configured number
> of replicas, and with it data security, is restored as soon as possible.
> 
> But on the other hand, it does not help much if the recovery procedure
> impacts the cluster's performance to a level where usability is severely
> reduced.
> 
> So maybe introduce another config option to control this ratio?
> 
> It could control more effectively how much IOPS/bandwidth is used (maybe
> straight numbers in the form of an IO rate limit), so that administrators
> have the chance to configure, according to their hardware environment,
> the "perfect" settings for their individual use case.
> 
> 
> Because, right now, when I reduce the weight of a 6 TB HDD from 1.0 to
> 0.9, with ~30 OSDs in the cluster, around 3-5% of the data will be moved
> around the cluster (replication 2).
> 
> While it is moving, there is a real performance hit on the virtual servers.
> 
> So if this could be solved by an IOPS/bandwidth rate limit per HDD, so
> that I can simply tell the cluster to use at most 10 IOPS and/or
> 10 MB/s for recovery, then I think it would be a great help for any
> use case and administrator.
> 
> Thanks !
> 
> 
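For reference, the knobs Oliver lists correspond to ceph.conf options like the ones below. The values are only an illustration of a deliberately gentle recovery profile, not a recommendation:

```ini
[osd]
; favour client I/O over recovery I/O (priorities range from 1 to 63)
osd client op priority = 63
osd recovery op priority = 1
; limit concurrent recovery work per OSD
osd max backfills = 1
osd recovery max active = 1
```

The same settings can also be changed on a running cluster, e.g. with
ceph tell osd.* injectargs '--osd-max-backfills 1', which is the usual way
to soften an already ongoing recovery.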

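Oliver's manual 0.1-step reweighting could also be automated. A rough Python sketch, assuming the ceph CLI is in $PATH; `ceph osd crush reweight` and `ceph health` are real commands, but the step size, the wait interval and the HEALTH_OK check are simplifications:

```python
import subprocess
import time

def reweight_steps(start=1.0, stop=0.0, step=0.1):
    """Weights to walk through, e.g. 1.0 -> 0.9 -> ... -> 0.0."""
    steps = []
    w = start - step
    while w > stop - 1e-9:          # tolerance guards against FP drift
        steps.append(round(w, 2))
        w -= step
    return steps

def drain_osd(osd_id, wait=60):
    """Step an OSD's CRUSH weight down, letting recovery finish in between."""
    for w in reweight_steps():
        subprocess.check_call(
            ["ceph", "osd", "crush", "reweight", "osd.%d" % osd_id, str(w)])
        # crude: wait until the cluster reports HEALTH_OK again
        while b"HEALTH_OK" not in subprocess.check_output(["ceph", "health"]):
            time.sleep(wait)
```

This spreads the data movement over many small rebalances, which is exactly the pain-easing Oliver describes, though it does not remove the per-step performance hit.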

-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


