Re: "full ratio" - how does this work with multiple pools on separate OSDs?

The ratio is calculated per OSD, and if any single OSD hits the hard
limit, the whole cluster transitions to the full state and stops
accepting writes until the situation is resolved.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
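
The per-OSD behavior described above can be sketched as a toy model (this is illustrative Python, not Ceph code; the 0.95 value mirrors the typical mon_osd_full_ratio default, and the names are assumptions):

```python
# Toy model of the per-OSD "full ratio" check: the cluster goes full
# if ANY single OSD crosses the ratio, regardless of which pool
# (or CRUSH ruleset) that OSD serves.

FULL_RATIO = 0.95  # assumed default, mirroring mon_osd_full_ratio

def cluster_is_full(osd_utilizations):
    """Return True if any OSD's utilization meets or exceeds the ratio."""
    return any(u >= FULL_RATIO for u in osd_utilizations)

# Fifty OSDs backing one pool filled past the ratio, fifty OSDs
# backing the other pool nearly empty:
fast_osds = [0.96] * 50
slow_osds = [0.05] * 50

# Even though half the cluster is empty, writes stop cluster-wide:
print(cluster_is_full(fast_osds + slow_osds))  # True
```

So in the scenario from the original question, filling only the 10K-RPM pool's OSDs would still push the entire cluster into the full state.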


On Tue, Mar 4, 2014 at 9:58 AM, Barnes, Thomas J
<thomas.j.barnes@xxxxxxxxx> wrote:
> I have a question about how "full ratio" works.
>
>
>
> How does a single "full ratio" setting work when the cluster has pools
> associated with different drives?
>
>
>
> For example, let's say I have a cluster comprised of fifty 10K RPM drives
> and fifty 7200 RPM drives.  I segregate the 10K and 7200 RPM drives
> under separate buckets, create separate rulesets for each bucket, and create
> separate pools for each bucket (using each bucket's respective ruleset).
>
>
>
> What happens if one of the pools fills to capacity while the other pool
> remains empty?
>
> How does the cluster respond when the OSDs in one pool become full while the
> OSDs in other pools do not?
>
> Is full ratio calculated over the entire cluster or "by pool"?
>
>
>
> Thanks,
>
>
>
> -Tom
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>