Re: Reducing I/O when increasing number of PGs

On Thu, Jan 23, 2014 at 3:35 AM, bf <bf31415@xxxxxxxxx> wrote:
> Gregory Farnum <greg@...> writes:
>
>
>> Yes, Ceph does all the heavy lifting. Multiple PGs with the same OSDs
>> can happen (eg, if you only have two OSDs, all PGs will be on both),
>> but it behaves about as well as is possible within the configuration
>> you give it.
>> -Greg
>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>
>
> Thanks Greg.  A few more questions-- again maybe obvious.
>
> If a pool defines a num of copies as 6, does that mean each PG will have
> 6 and only 6 OSDs?

Yes.

> How is the primary OSD determined per PG?

CRUSH determines that — at the moment it's simply the first OSD in the
generated list, but that's subject to change.

> When an OSD within a PG fails, presumably the PG would be operating outside
> of the target diversity/redundancy of the pool.   Does that PG start to search
> for another OSD that can be brought into the PG to satisfy the pool diversity/
> redundancy requirements?  Is there a way to tell when a PG is in this hazard
> state?

If an OSD gets marked out, then the map has changed and the CRUSH
algorithm outputs a different OSD in its place. There's no searching
involved, just a computation. Whenever a PG is not fully replicated,
it's marked "degraded" in the pg map.
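To make the "no searching, just a computation" point concrete, here is a toy sketch (not Ceph's actual CRUSH algorithm; just a rendezvous-hash stand-in with made-up names) showing that placement is a pure function of the cluster map: the primary is simply the first OSD in the computed list, and marking an OSD out changes the map, so re-running the same computation yields a replacement.

```python
import hashlib

def placement(osds_in, pg_id, size):
    """Toy stand-in for CRUSH: rank the in-OSDs by a hash of
    (pg_id, osd) and take the first `size`. Same map in, same
    ordered list out -- purely deterministic, no searching."""
    ranked = sorted(
        osds_in,
        key=lambda osd: hashlib.sha256(f"{pg_id}:{osd}".encode()).digest(),
    )
    return ranked[:size]

# A cluster map with eight in-OSDs (hypothetical IDs).
osd_map_epoch_1 = [0, 1, 2, 3, 4, 5, 6, 7]
acting = placement(osd_map_epoch_1, "1.2f", size=3)
primary = acting[0]  # the primary is just the first OSD in the list

# Mark one acting OSD out: the map changed, so recomputing the
# placement substitutes a different OSD in its place.
osd_map_epoch_2 = [o for o in osd_map_epoch_1 if o != acting[1]]
acting2 = placement(osd_map_epoch_2, "1.2f", size=3)
```

With a rendezvous-style ranking the surviving members keep their relative order, so only the lost OSD's slot is refilled; real CRUSH behaves analogously, minimizing data movement when the map changes.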

For further details, you should check out the documentation
(ceph.com/docs) or the research papers
(http://ceph.com/resources/publications/).
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




