Re: "full ratio" - how does this work with multiple pools on seprate OSDs?

OK - Thanks Greg.  This suggests to me that if you want to prevent the cluster from locking up, you need to monitor the "fullness" of each OSD, and not just the utilization of the entire cluster's capacity.  
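
A minimal sketch of that kind of per-OSD check, shelling out to "ceph osd df --format json" (present on newer releases); the JSON field names used below ("nodes", "name", "kb", "kb_used") and the 0.85/0.95 thresholds are assumptions to verify against your release and your mon_osd_nearfull_ratio / mon_osd_full_ratio settings:

# Sketch: flag individual OSDs approaching the nearfull/full thresholds,
# since a single full OSD is enough to put the whole cluster into the
# "full" state regardless of overall cluster utilization.
# Assumes `ceph osd df --format json` exists on this release and that its
# output carries a "nodes" list with per-OSD "name", "kb", "kb_used".
import json
import subprocess

NEARFULL = 0.85   # typical mon_osd_nearfull_ratio default
FULL = 0.95       # typical mon_osd_full_ratio default

def osd_utilization():
    out = subprocess.check_output(["ceph", "osd", "df", "--format", "json"])
    for node in json.loads(out).get("nodes", []):
        kb = node.get("kb", 0)
        if kb:
            yield node["name"], float(node["kb_used"]) / kb

for name, util in osd_utilization():
    if util >= FULL:
        print("%s is FULL (%.0f%%) - cluster will stop accepting writes" % (name, util * 100))
    elif util >= NEARFULL:
        print("%s is near full (%.0f%%)" % (name, util * 100))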

It also suggests that if you want to remove a server from the cluster, you need to calculate how much capacity will be removed from the pools that utilize that server's capacity, and not just from the entire cluster.
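
As a rough starting point for that estimate, a sketch that walks "ceph osd tree --format json" and sums the raw capacity of the OSDs under a given host within a given CRUSH root (the root a pool's rule takes from). The tree layout assumed here ("nodes" entries with "children" id lists) and the field names are assumptions, the host and root names are hypothetical, and the figures are raw capacity, so divide by the pool's replica count for usable space:

# Sketch: estimate how much raw capacity leaves a CRUSH root (and therefore
# the pools whose rules take from that root) when one host is removed.
# Assumes `ceph osd tree --format json` returns {"nodes": [...]} where bucket
# nodes carry a "children" list of ids, and `ceph osd df --format json`
# reports per-OSD "kb".
import json
import subprocess

def ceph_json(*args):
    return json.loads(subprocess.check_output(("ceph",) + args + ("--format", "json")))

def osd_ids_under(nodes, bucket_name):
    """Collect the OSD ids reachable below the named bucket."""
    by_id = {n["id"]: n for n in nodes}
    start = next(n for n in nodes if n["name"] == bucket_name)
    stack, osds = [start], []
    while stack:
        n = stack.pop()
        if n["type"] == "osd":
            osds.append(n["id"])
        for child in n.get("children", []):
            stack.append(by_id[child])
    return osds

tree = ceph_json("osd", "tree")["nodes"]
kb_by_osd = {n["id"]: n.get("kb", 0) for n in ceph_json("osd", "df")["nodes"]}

host = "hostA"   # hypothetical host being decommissioned
root = "fast"    # hypothetical root its pools' rules take from
host_kb = sum(kb_by_osd.get(i, 0) for i in osd_ids_under(tree, host))
root_kb = sum(kb_by_osd.get(i, 0) for i in osd_ids_under(tree, root))
print("removing %s takes %.1f%% of the raw capacity behind root %s"
      % (host, 100.0 * host_kb / root_kb, root))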

-Tom

-----Original Message-----
From: Gregory Farnum [mailto:greg@xxxxxxxxxxx] 
Sent: Tuesday, March 04, 2014 10:10 AM
To: Barnes, Thomas J
Cc: ceph-users@xxxxxxxx
Subject: Re: "full ratio" - how does this work with multiple pools on separate OSDs?

The setting is calculated per-OSD, and if any OSD hits the hard limit the whole cluster transitions to the full state and stops accepting writes until the situation is resolved.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
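
A toy illustration of that rule, using the usual defaults of 0.85 for mon_osd_nearfull_ratio and 0.95 for mon_osd_full_ratio (the function name is made up for the example):

# Toy model of the rule described above: the full flag is cluster-wide,
# but it is driven by the single most-utilized OSD, not by the average.
NEARFULL_RATIO = 0.85   # typical mon_osd_nearfull_ratio default
FULL_RATIO = 0.95       # typical mon_osd_full_ratio default

def cluster_state(osd_utilizations):
    worst = max(osd_utilizations)
    if worst >= FULL_RATIO:
        return "full"        # writes are refused cluster-wide
    if worst >= NEARFULL_RATIO:
        return "nearfull"    # HEALTH_WARN, writes still accepted
    return "ok"

# Fifty nearly empty OSDs do not help once one OSD elsewhere reaches 96%:
print(cluster_state([0.10] * 50 + [0.96]))   # -> "full"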


On Tue, Mar 4, 2014 at 9:58 AM, Barnes, Thomas J <thomas.j.barnes@xxxxxxxxx> wrote:
> I have a question about how "full ratio" works.
>
>
>
> How does a single "full ratio" setting work when the cluster has pools 
> associated with different drives?
>
>
>
> For example, let's say I have a cluster comprised of fifty 10K RPM 
> drives and fifty 7200 RPM drives.  I segregate the 10K drives and 
> 7200 RPM drives under separate buckets, create separate rulesets for 
> each bucket, and create separate pools for each bucket (using each bucket's respective ruleset).
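
For concreteness, a sketch of that layout driven through the CLI from Python; the bucket, rule, and pool names and the PG counts are hypothetical, and the crush_ruleset pool setting and "ruleset" field below match older releases (newer ones use crush_rule), so check the syntax against your version:

# Sketch of the segregated layout described above: one CRUSH root per drive
# class, one ruleset per root, one pool per ruleset. Names and PG counts are
# hypothetical; command syntax should be checked against your Ceph release.
import json
import subprocess

def ceph(*args):
    return subprocess.check_output(("ceph",) + args)

for root in ("fast-10k", "slow-7200"):
    ceph("osd", "crush", "add-bucket", root, "root")
    # ...then `ceph osd crush move <host> root=<root>` for each host that
    # belongs under this root (omitted here).
    ceph("osd", "crush", "rule", "create-simple", root + "-rule", root, "host")

for pool, rule in (("pool-10k", "fast-10k-rule"), ("pool-7200", "slow-7200-rule")):
    ceph("osd", "pool", "create", pool, "1024", "1024")
    rule_id = json.loads(ceph("osd", "crush", "rule", "dump", rule,
                              "--format", "json"))["ruleset"]
    ceph("osd", "pool", "set", pool, "crush_ruleset", str(rule_id))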
>
>
>
> What happens if one of the pools fills to capacity while the other 
> pool remains empty?
>
> How does the cluster respond when the OSDs in one pool become full 
> while the OSDs in other pools do not?
>
> Is full ratio calculated over the entire cluster or "by pool"?
>
>
>
> Thanks,
>
>
>
> -Tom
>
>
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



